Nov 28 11:53:01 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 11:53:01 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 11:53:01 crc restorecon[4702]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 
11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 
crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 
11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 11:53:01 crc restorecon[4702]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 
crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc 
restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 11:53:01 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 28 11:53:02 crc kubenswrapper[5030]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 11:53:02 crc kubenswrapper[5030]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 11:53:02 crc kubenswrapper[5030]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 11:53:02 crc kubenswrapper[5030]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 28 11:53:02 crc kubenswrapper[5030]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 11:53:02 crc kubenswrapper[5030]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.175503 5030 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182407 5030 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182453 5030 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182493 5030 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182504 5030 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182513 5030 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182522 5030 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182532 5030 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182540 5030 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182549 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182557 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182564 5030 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182572 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182580 5030 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182588 5030 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182599 5030 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182608 5030 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182617 5030 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182627 5030 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182636 5030 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182644 5030 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182652 5030 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182660 5030 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182668 5030 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182676 5030 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182683 5030 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182691 5030 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182699 5030 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182706 5030 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182715 5030 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182723 5030 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImagesAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182743 5030 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182751 5030 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182759 5030 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182768 5030 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182776 5030 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182784 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182791 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182800 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182807 5030 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182815 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182823 5030 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182830 5030 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182838 5030 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182846 5030 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182854 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182862 5030 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182871 5030 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182881 5030 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182889 5030 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182896 5030 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182904 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182912 5030 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182919 5030 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182927 5030 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182934 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182943 5030 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182950 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182961 5030 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182971 5030 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182980 5030 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182988 5030 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.182998 5030 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183008 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183017 5030 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183049 5030 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183059 5030 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183069 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183079 5030 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183091 5030 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183103 5030 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.183111 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183522 5030 flags.go:64] FLAG: --address="0.0.0.0"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183544 5030 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183562 5030 flags.go:64] FLAG: --anonymous-auth="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183574 5030 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183586 5030 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183595 5030 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183607 5030 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183618 5030 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183629 5030 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183638 5030 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183649 5030 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183659 5030 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183669 5030 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183679 5030 flags.go:64] FLAG: --cgroup-root=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183687 5030 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183697 5030 flags.go:64] FLAG: --client-ca-file=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183708 5030 flags.go:64] FLAG: --cloud-config=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183717 5030 flags.go:64] FLAG: --cloud-provider=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183726 5030 flags.go:64] FLAG: --cluster-dns="[]"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183736 5030 flags.go:64] FLAG: --cluster-domain=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183745 5030 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183754 5030 flags.go:64] FLAG: --config-dir=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183765 5030 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183775 5030 flags.go:64] FLAG: --container-log-max-files="5"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183796 5030 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183806 5030 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183815 5030 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183824 5030 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183834 5030 flags.go:64] FLAG: --contention-profiling="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183843 5030 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183852 5030 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183862 5030 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183870 5030 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183881 5030 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183891 5030 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183900 5030 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183909 5030 flags.go:64] FLAG: --enable-load-reader="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183918 5030 flags.go:64] FLAG: --enable-server="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183927 5030 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183939 5030 flags.go:64] FLAG: --event-burst="100"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183949 5030 flags.go:64] FLAG: --event-qps="50"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183958 5030 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183967 5030 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183976 5030 flags.go:64] FLAG: --eviction-hard=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183987 5030 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.183998 5030 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184010 5030 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184023 5030 flags.go:64] FLAG: --eviction-soft=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184037 5030 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184049 5030 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184062 5030 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184072 5030 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184081 5030 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184090 5030 flags.go:64] FLAG: --fail-swap-on="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184099 5030 flags.go:64] FLAG: --feature-gates=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184110 5030 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184119 5030 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184129 5030 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184138 5030 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184148 5030 flags.go:64] FLAG: --healthz-port="10248"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184157 5030 flags.go:64] FLAG: --help="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184167 5030 flags.go:64] FLAG: --hostname-override=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184175 5030 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184185 5030 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184194 5030 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184203 5030 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184212 5030 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184221 5030 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184230 5030 flags.go:64] FLAG: --image-service-endpoint=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184239 5030 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184248 5030 flags.go:64] FLAG: --kube-api-burst="100"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184257 5030 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184266 5030 flags.go:64] FLAG: --kube-api-qps="50"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184275 5030 flags.go:64] FLAG: --kube-reserved=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184284 5030 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184292 5030 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184302 5030 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184312 5030 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184321 5030 flags.go:64] FLAG: --lock-file=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184329 5030 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184339 5030 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184348 5030 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184361 5030 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184374 5030 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184383 5030 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184392 5030 flags.go:64] FLAG: --logging-format="text"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184401 5030 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184411 5030 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184420 5030 flags.go:64] FLAG: --manifest-url=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184429 5030 flags.go:64] FLAG: --manifest-url-header=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184441 5030 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184450 5030 flags.go:64] FLAG: --max-open-files="1000000"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184462 5030 flags.go:64] FLAG: --max-pods="110"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184501 5030 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184510 5030 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184519 5030 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184528 5030 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184539 5030 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184549 5030 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184559 5030 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184580 5030 flags.go:64] FLAG: --node-status-max-images="50"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184589 5030 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184599 5030 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184608 5030 flags.go:64] FLAG: --pod-cidr=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184617 5030 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184630 5030 flags.go:64] FLAG: --pod-manifest-path=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184639 5030 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184649 5030 flags.go:64] FLAG: --pods-per-core="0"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184659 5030 flags.go:64] FLAG: --port="10250"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184669 5030 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184677 5030 flags.go:64] FLAG: --provider-id=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184686 5030 flags.go:64] FLAG: --qos-reserved=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184696 5030 flags.go:64] FLAG: --read-only-port="10255"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184705 5030 flags.go:64] FLAG: --register-node="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184714 5030 flags.go:64] FLAG: --register-schedulable="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184723 5030 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184738 5030 flags.go:64] FLAG: --registry-burst="10"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184747 5030 flags.go:64] FLAG: --registry-qps="5"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184756 5030 flags.go:64] FLAG: --reserved-cpus=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184766 5030 flags.go:64] FLAG: --reserved-memory=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184778 5030 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184787 5030 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184796 5030 flags.go:64] FLAG: --rotate-certificates="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184805 5030 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184814 5030 flags.go:64] FLAG: --runonce="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184822 5030 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184832 5030 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184841 5030 flags.go:64] FLAG: --seccomp-default="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184850 5030 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184859 5030 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184869 5030 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184878 5030 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184887 5030 flags.go:64] FLAG: --storage-driver-password="root"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184897 5030 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184906 5030 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184914 5030 flags.go:64] FLAG: --storage-driver-user="root"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184923 5030 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184932 5030 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184941 5030 flags.go:64] FLAG: --system-cgroups=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184952 5030 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184966 5030 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184975 5030 flags.go:64] FLAG: --tls-cert-file=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184984 5030 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.184996 5030 flags.go:64] FLAG: --tls-min-version=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185008 5030 flags.go:64] FLAG: --tls-private-key-file=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185020 5030 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185032 5030 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185044 5030 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185055 5030 flags.go:64] FLAG: --v="2"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185071 5030 flags.go:64] FLAG: --version="false"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185084 5030 flags.go:64] FLAG: --vmodule=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185097 5030 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.185108 5030 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185313 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185324 5030 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185335 5030 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185343 5030 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185351 5030 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185361 5030 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185371 5030 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185381 5030 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185390 5030 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185400 5030 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185409 5030 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185419 5030 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185433 5030 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185446 5030 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185456 5030 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185498 5030 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185509 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185520 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185531 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185542 5030 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185552 5030 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185562 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185572 5030 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185581 5030 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185591 5030 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185601 5030 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185611 5030 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185621 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185635 5030 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185647 5030 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185657 5030 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185667 5030 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185677 5030 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185687 5030 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185696 5030 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185706 5030 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185715 5030 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185724 5030 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185736 5030 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185746 5030 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185757 5030 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185766 5030 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185776 5030 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185786 5030 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185796 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185806 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185817 5030 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185826 5030 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185836 5030 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185846 5030 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185859 5030 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185871 5030 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185881 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185889 5030 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185897 5030 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185905 5030 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185923 5030 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185931 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185941 5030 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185949 5030 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185958 5030 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185969 5030 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185978 5030 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185988 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.185999 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.186008 5030 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.186017 5030 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.186026 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.186035 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.186047 5030 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.186057 5030 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.186073 5030 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.202075 5030 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.202153 5030 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202261 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202273 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202279 5030 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202286 5030 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202291 5030 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202296 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202301 5030 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202308 5030 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202318 5030 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202324 5030 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202329 5030 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202334 5030 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202342 5030 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202351 5030 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202358 5030 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202364 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202371 5030 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202377 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202384 5030 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202390 5030 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202395 5030 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202400 5030 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202405 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202410 5030 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202415 5030 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202420 5030 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202426 5030 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202431 5030 feature_gate.go:330] unrecognized feature gate:
GatewayAPI Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202436 5030 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202442 5030 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202447 5030 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202452 5030 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202457 5030 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202477 5030 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202485 5030 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202491 5030 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202496 5030 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202501 5030 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202506 5030 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202513 5030 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202519 5030 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202525 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202531 5030 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202536 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202543 5030 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202549 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202555 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202561 5030 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202567 5030 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202572 5030 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202579 5030 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202585 5030 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202590 5030 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202595 5030 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 11:53:02 
crc kubenswrapper[5030]: W1128 11:53:02.202600 5030 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202605 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202611 5030 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202616 5030 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202620 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202625 5030 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202630 5030 feature_gate.go:330] unrecognized feature gate: Example Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202635 5030 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202640 5030 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202644 5030 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202649 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202654 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202659 5030 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202664 5030 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 11:53:02 crc 
kubenswrapper[5030]: W1128 11:53:02.202668 5030 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202673 5030 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202680 5030 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.202689 5030 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202860 5030 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202868 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202875 5030 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202881 5030 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202886 5030 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202892 5030 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202896 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202902 5030 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202907 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202913 5030 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202920 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202926 5030 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202932 5030 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202937 5030 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202942 5030 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202947 5030 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202952 5030 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202957 5030 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202962 5030 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202967 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202972 5030 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202977 5030 feature_gate.go:330] unrecognized feature 
gate: NewOLM Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202982 5030 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202987 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202992 5030 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.202997 5030 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203028 5030 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203033 5030 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203038 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203043 5030 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203049 5030 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203054 5030 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203058 5030 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203065 5030 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203072 5030 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203079 5030 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203085 5030 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203090 5030 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203103 5030 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203109 5030 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203114 5030 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203120 5030 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203126 5030 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203132 5030 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203137 5030 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203142 5030 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203148 5030 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203153 5030 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 
11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203158 5030 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203164 5030 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203171 5030 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203178 5030 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203186 5030 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203193 5030 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203200 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203208 5030 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203214 5030 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203221 5030 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203227 5030 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203234 5030 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203239 5030 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203244 5030 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 
11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203249 5030 feature_gate.go:330] unrecognized feature gate: Example Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203254 5030 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203259 5030 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203264 5030 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203269 5030 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203275 5030 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203280 5030 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203285 5030 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.203291 5030 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.203300 5030 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.203633 5030 server.go:940] "Client rotation is on, will bootstrap in background" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.207551 5030 bootstrap.go:85] "Current kubeconfig file 
contents are still valid, no bootstrap necessary" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.207671 5030 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.208271 5030 server.go:997] "Starting client certificate rotation" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.208301 5030 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.208526 5030 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-31 00:31:01.585634534 +0000 UTC Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.208652 5030 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 780h37m59.376986174s for next certificate rotation Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.215845 5030 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.217925 5030 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.227269 5030 log.go:25] "Validated CRI v1 runtime API" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.248569 5030 log.go:25] "Validated CRI v1 image API" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.250355 5030 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.255273 5030 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-11-49-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 28 11:53:02 crc 
kubenswrapper[5030]: I1128 11:53:02.255319 5030 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.268998 5030 manager.go:217] Machine: {Timestamp:2025-11-28 11:53:02.26767943 +0000 UTC m=+0.209422133 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c965c05c-761f-4745-b234-194f03087472 BootID:b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 
Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5f:4e:cc Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5f:4e:cc Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:c8:a4 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a2:cf:bf Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e1:20:ab Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:81:87:c4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:b0:b3:1b:c3:d9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:e7:39:76:b4:e2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 
Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.269251 5030 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.269381 5030 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.270321 5030 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.270647 5030 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.270708 5030 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.271000 5030 topology_manager.go:138] "Creating topology manager with none policy"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.271013 5030 container_manager_linux.go:303] "Creating device plugin manager"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.271282 5030 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.271318 5030 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.271788 5030 state_mem.go:36] "Initialized new in-memory state store"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.271905 5030 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.272556 5030 kubelet.go:418] "Attempting to sync node with API server"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.272585 5030 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.272612 5030 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.272630 5030 kubelet.go:324] "Adding apiserver pod source"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.272645 5030 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.274651 5030 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.275194 5030 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276019 5030 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.276266 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.276275 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.276450 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.276565 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276690 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276718 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276730 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276742 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276760 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276772 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276784 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276832 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276846 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276860 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276894 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.276908 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.277138 5030 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.277822 5030 server.go:1280] "Started kubelet"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.278242 5030 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.278337 5030 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.278341 5030 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.279594 5030 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.279722 5030 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.279752 5030 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.279945 5030 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:45:10.452851665 +0000 UTC
Nov 28 11:53:02 crc systemd[1]: Started Kubernetes Kubelet.
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.283275 5030 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 890h52m8.169585887s for next certificate rotation
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.280209 5030 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.280182 5030 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.283637 5030 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.283728 5030 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.285057 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.285674 5030 server.go:460] "Adding debug handlers to kubelet server"
Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.286041 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.287051 5030 factory.go:55] Registering systemd factory
Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.286180 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.288309 5030 factory.go:221] Registration of the systemd container factory successfully
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.291531 5030 factory.go:153] Registering CRI-O factory
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.292005 5030 factory.go:221] Registration of the crio container factory successfully
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.292094 5030 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.292120 5030 factory.go:103] Registering Raw factory
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.292139 5030 manager.go:1196] Started watching for new ooms in manager
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.332388 5030 manager.go:319] Starting recovery of all containers
Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.288129 5030 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c297fa7f3f92d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:53:02.277761325 +0000 UTC m=+0.219504028,LastTimestamp:2025-11-28 11:53:02.277761325 +0000 UTC m=+0.219504028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345530 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345572 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345583 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345592 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345602 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345611 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345621 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345631 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345641 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345650 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345659 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345668 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345676 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345686 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345695 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345704 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345735 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345743 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345755 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345764 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345774 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345782 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345841 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.345850 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347699 5030 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347733 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347748 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347781 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347803 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347815 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347833 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347844 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347862 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347874 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347886 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347904 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347915 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347934 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347945 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347958 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347973 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347983 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.347999 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348010 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348023 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348038 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348051 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348071 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348083 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348095 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348112 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348123 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348139 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348157 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348175 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348195 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348210 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348230 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348242 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348261 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348273 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348283 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348300 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348312 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348350 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348365 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348378 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348395 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348406 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348422 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348434 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348448 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348475 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348489 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348505 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348516 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348528 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348543 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348554 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348573 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348586 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348600 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348615 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348627 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348643 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348655 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config"
seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348667 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348683 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348694 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348709 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348719 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348731 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348746 5030 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348758 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348775 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348787 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348798 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348813 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348825 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348839 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348850 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348861 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348874 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348887 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348901 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348917 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348936 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348954 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348971 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.348988 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349005 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349017 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349033 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349046 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349061 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349076 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349086 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349096 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349109 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349124 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349138 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349148 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349158 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" 
seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349171 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349182 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349196 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349211 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349222 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349236 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: 
I1128 11:53:02.349247 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349261 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349272 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349284 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349300 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349311 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349327 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349341 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349353 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349369 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349381 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349395 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349408 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349419 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349434 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349447 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349460 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349482 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349494 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349509 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349520 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349531 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349547 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349559 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349572 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" 
seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349583 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349594 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349608 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349621 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349636 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349647 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: 
I1128 11:53:02.349657 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349670 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349681 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349695 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349705 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349716 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349729 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349743 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349759 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349769 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349780 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349793 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349805 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349820 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349830 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349842 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349856 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349866 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349878 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349888 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349898 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349911 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349923 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349938 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349948 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349958 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349971 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349982 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.349994 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350006 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350016 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" 
seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350029 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350040 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350053 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350063 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350075 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350090 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350099 5030 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350112 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350123 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350133 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350146 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350156 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350171 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350184 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350197 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350208 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350219 5030 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350229 5030 reconstruct.go:97] "Volume reconstruction finished" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.350239 5030 reconciler.go:26] "Reconciler: start to sync state" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.351341 5030 manager.go:324] Recovery completed Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.367685 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 
11:53:02.369228 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.369739 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.369831 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.370776 5030 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.370799 5030 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.370837 5030 state_mem.go:36] "Initialized new in-memory state store" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.381890 5030 policy_none.go:49] "None policy: Start" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.383215 5030 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.383286 5030 state_mem.go:35] "Initializing new in-memory state store" Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.384457 5030 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.389574 5030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.391616 5030 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.391650 5030 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.391673 5030 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.391739 5030 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.392977 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.393017 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.434760 5030 manager.go:334] "Starting Device Plugin manager" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.434816 5030 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.434831 5030 server.go:79] "Starting device plugin registration server" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.435337 5030 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.435356 5030 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 11:53:02 crc 
kubenswrapper[5030]: I1128 11:53:02.435634 5030 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.435724 5030 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.435734 5030 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.444653 5030 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.486366 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.492546 5030 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.492662 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.494176 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.494227 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.494239 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.494431 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.494785 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.494828 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.495810 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.495840 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.495851 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.496438 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.496494 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.496505 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.496608 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.496789 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.496843 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.497281 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.497309 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.497319 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.497411 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.497665 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.497754 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498313 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498340 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498350 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498387 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498427 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498453 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498496 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498563 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.498598 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500133 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500144 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500196 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500209 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500224 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500290 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500161 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500430 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500660 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.500707 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.501749 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.501804 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.501827 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.535596 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.536534 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.536564 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.536576 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.536600 5030 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.537253 5030 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654141 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654201 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654247 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654280 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654310 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654339 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654365 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654589 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654667 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654724 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654782 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654831 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654873 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654917 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.654995 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.737567 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.740699 5030 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.741127 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.741172 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.741211 5030 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.741955 5030 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756336 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756408 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756455 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756539 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756594 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756598 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756704 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756713 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756623 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756737 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756762 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756703 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756791 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756827 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.756607 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757041 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757070 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757049 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757087 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757114 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757129 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757171 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757210 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757251 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757262 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757292 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757317 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757354 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757387 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.757535 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.835805 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.862753 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.871298 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.876581 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f532166c07a78a1f4eb91e6f0b2870b8e5b2680b1c973d0f02ccc05acf7e5196 WatchSource:0}: Error finding container f532166c07a78a1f4eb91e6f0b2870b8e5b2680b1c973d0f02ccc05acf7e5196: Status 404 returned error can't find the container with id f532166c07a78a1f4eb91e6f0b2870b8e5b2680b1c973d0f02ccc05acf7e5196 Nov 28 11:53:02 crc kubenswrapper[5030]: E1128 11:53:02.887786 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.896153 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.898960 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7d2722ec7b65dc0c8b510bf2ff961d27f70c78c9b977acb517b5ab37b6003567 WatchSource:0}: Error finding container 7d2722ec7b65dc0c8b510bf2ff961d27f70c78c9b977acb517b5ab37b6003567: Status 404 returned error can't find the container with id 7d2722ec7b65dc0c8b510bf2ff961d27f70c78c9b977acb517b5ab37b6003567 Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.899804 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f7dcd3285c364bdbc391a21e56e5bfee00e0578bd2a6b5cde9371b7ed8a54455 WatchSource:0}: Error finding container f7dcd3285c364bdbc391a21e56e5bfee00e0578bd2a6b5cde9371b7ed8a54455: Status 404 returned error can't find the container with id f7dcd3285c364bdbc391a21e56e5bfee00e0578bd2a6b5cde9371b7ed8a54455 Nov 28 11:53:02 crc kubenswrapper[5030]: I1128 11:53:02.907249 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.922836 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ea2a69eece5b88828a356b552a5e1704053cfcc46180abb8754702508c35b10a WatchSource:0}: Error finding container ea2a69eece5b88828a356b552a5e1704053cfcc46180abb8754702508c35b10a: Status 404 returned error can't find the container with id ea2a69eece5b88828a356b552a5e1704053cfcc46180abb8754702508c35b10a Nov 28 11:53:02 crc kubenswrapper[5030]: W1128 11:53:02.933833 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1a18b47a663f888d0ce0a480a22e0ad02ee1b49f3e9ef1b54536c005a702c83e WatchSource:0}: Error finding container 1a18b47a663f888d0ce0a480a22e0ad02ee1b49f3e9ef1b54536c005a702c83e: Status 404 returned error can't find the container with id 1a18b47a663f888d0ce0a480a22e0ad02ee1b49f3e9ef1b54536c005a702c83e Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.142970 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.145088 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.145144 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.145155 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.145190 5030 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:53:03 crc 
kubenswrapper[5030]: E1128 11:53:03.145826 5030 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.280665 5030 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.398084 5030 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681" exitCode=0 Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.398179 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.398555 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a18b47a663f888d0ce0a480a22e0ad02ee1b49f3e9ef1b54536c005a702c83e"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.398697 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.400032 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.400069 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 
11:53:03.400083 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.400717 5030 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc" exitCode=0 Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.400774 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.400817 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ea2a69eece5b88828a356b552a5e1704053cfcc46180abb8754702508c35b10a"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.400875 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.401645 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.401679 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.401694 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.402597 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.403669 5030 generic.go:334] "Generic (PLEG): container 
finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c" exitCode=0 Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.403718 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.403725 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.403743 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.403753 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.403744 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7dcd3285c364bdbc391a21e56e5bfee00e0578bd2a6b5cde9371b7ed8a54455"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.404031 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.410015 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.410093 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.410126 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.415344 5030 
generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20" exitCode=0 Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.415434 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.415495 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7d2722ec7b65dc0c8b510bf2ff961d27f70c78c9b977acb517b5ab37b6003567"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.415581 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.416565 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.416605 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.416625 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.417348 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570"} Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.417373 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f532166c07a78a1f4eb91e6f0b2870b8e5b2680b1c973d0f02ccc05acf7e5196"} Nov 28 11:53:03 crc kubenswrapper[5030]: W1128 11:53:03.546849 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 28 11:53:03 crc kubenswrapper[5030]: E1128 11:53:03.546970 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:53:03 crc kubenswrapper[5030]: W1128 11:53:03.681584 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 28 11:53:03 crc kubenswrapper[5030]: E1128 11:53:03.681664 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:53:03 crc kubenswrapper[5030]: E1128 11:53:03.688677 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" 
interval="1.6s" Nov 28 11:53:03 crc kubenswrapper[5030]: W1128 11:53:03.801252 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 28 11:53:03 crc kubenswrapper[5030]: E1128 11:53:03.801343 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:53:03 crc kubenswrapper[5030]: W1128 11:53:03.807248 5030 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Nov 28 11:53:03 crc kubenswrapper[5030]: E1128 11:53:03.807346 5030 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.946810 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.949291 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.949337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:03 
crc kubenswrapper[5030]: I1128 11:53:03.949395 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:03 crc kubenswrapper[5030]: I1128 11:53:03.949439 5030 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:53:03 crc kubenswrapper[5030]: E1128 11:53:03.958989 5030 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.421664 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.421842 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.422907 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.422955 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.422965 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.423949 5030 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff" exitCode=0 Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.424058 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 
11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.424088 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.424972 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.425014 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.425034 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.427332 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.427402 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.427414 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.427567 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 
11:53:04.428976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.429057 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.429069 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.434228 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.434295 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.434303 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.434307 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.435067 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.435088 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:04 
crc kubenswrapper[5030]: I1128 11:53:04.435100 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.442921 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.442944 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.442955 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83"} Nov 28 11:53:04 crc kubenswrapper[5030]: I1128 11:53:04.442965 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0"} Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.453014 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b"} Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.453190 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.455217 5030 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.455301 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.455329 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.458221 5030 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3" exitCode=0 Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.458322 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3"} Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.458370 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.458383 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.459911 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.459960 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.460001 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.460024 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 
11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.460002 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.460130 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.496990 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.559199 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.561332 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.561397 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.561418 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.561503 5030 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:53:05 crc kubenswrapper[5030]: I1128 11:53:05.822798 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.468928 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289"} Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.468979 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93"} Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.468993 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8"} Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.469089 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.469106 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.470565 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.470604 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.470608 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.470643 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.470617 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:06 crc kubenswrapper[5030]: I1128 11:53:06.470657 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.310878 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 
28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.477796 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196"} Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.477880 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384"} Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.477932 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.477933 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.479866 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.479926 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.479985 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.479875 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.480096 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.480113 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 
11:53:07.743325 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.743619 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.745396 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.745568 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.745651 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:07 crc kubenswrapper[5030]: I1128 11:53:07.877657 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.365098 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.365425 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.367356 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.367428 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.367450 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.480678 5030 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.480682 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.482632 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.482659 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.482693 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.482711 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.482697 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:08 crc kubenswrapper[5030]: I1128 11:53:08.482767 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.484050 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.488328 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.488421 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.488452 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.979237 5030 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.979717 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.981689 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.981767 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:09 crc kubenswrapper[5030]: I1128 11:53:09.981797 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.319023 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.319409 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.321023 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.321080 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.321099 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.327558 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.486905 5030 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.488090 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.488163 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:10 crc kubenswrapper[5030]: I1128 11:53:10.488184 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.183792 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.490272 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.491429 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.491462 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.491493 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.750127 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.750360 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.751800 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 
11:53:11.751834 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:11 crc kubenswrapper[5030]: I1128 11:53:11.751854 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:12 crc kubenswrapper[5030]: E1128 11:53:12.444750 5030 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 11:53:14 crc kubenswrapper[5030]: E1128 11:53:14.139358 5030 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187c297fa7f3f92d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:53:02.277761325 +0000 UTC m=+0.219504028,LastTimestamp:2025-11-28 11:53:02.277761325 +0000 UTC m=+0.219504028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:53:14 crc kubenswrapper[5030]: I1128 11:53:14.184760 5030 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 11:53:14 crc kubenswrapper[5030]: I1128 11:53:14.184895 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 11:53:14 crc kubenswrapper[5030]: I1128 11:53:14.281597 5030 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 28 11:53:15 crc kubenswrapper[5030]: I1128 11:53:15.254144 5030 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 11:53:15 crc kubenswrapper[5030]: I1128 11:53:15.254245 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 11:53:15 crc kubenswrapper[5030]: I1128 11:53:15.259809 5030 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 11:53:15 crc kubenswrapper[5030]: I1128 11:53:15.259907 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.317521 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.317760 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.319185 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.319248 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.319271 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.325213 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.505969 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.506032 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.507272 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.507315 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.507325 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 
11:53:17.750685 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.751387 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.752853 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.752932 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:17 crc kubenswrapper[5030]: I1128 11:53:17.752953 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.244643 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.246906 5030 trace.go:236] Trace[771063204]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:53:06.306) (total time: 13940ms): Nov 28 11:53:20 crc kubenswrapper[5030]: Trace[771063204]: ---"Objects listed" error: 13940ms (11:53:20.246) Nov 28 11:53:20 crc kubenswrapper[5030]: Trace[771063204]: [13.940259779s] [13.940259779s] END Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.246945 5030 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.248539 5030 trace.go:236] Trace[625807209]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:53:06.079) (total time: 14168ms): Nov 28 11:53:20 crc kubenswrapper[5030]: 
Trace[625807209]: ---"Objects listed" error: 14168ms (11:53:20.248) Nov 28 11:53:20 crc kubenswrapper[5030]: Trace[625807209]: [14.16862269s] [14.16862269s] END Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.248578 5030 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.249561 5030 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.250103 5030 trace.go:236] Trace[1489880978]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:53:06.506) (total time: 13743ms): Nov 28 11:53:20 crc kubenswrapper[5030]: Trace[1489880978]: ---"Objects listed" error: 13743ms (11:53:20.250) Nov 28 11:53:20 crc kubenswrapper[5030]: Trace[1489880978]: [13.74386315s] [13.74386315s] END Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.250131 5030 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.250351 5030 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.252026 5030 trace.go:236] Trace[1856657332]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 11:53:06.258) (total time: 13993ms): Nov 28 11:53:20 crc kubenswrapper[5030]: Trace[1856657332]: ---"Objects listed" error: 13993ms (11:53:20.251) Nov 28 11:53:20 crc kubenswrapper[5030]: Trace[1856657332]: [13.993786459s] [13.993786459s] END Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.252062 5030 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.284049 5030 apiserver.go:52] "Watching 
apiserver" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.288886 5030 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.289208 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.289710 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.289795 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.289710 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.289965 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.290208 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.290393 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.290900 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.298851 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.299111 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.301314 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.301391 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.301578 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.301752 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.301839 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.301995 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.302400 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.304311 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.305513 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.342159 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.360275 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.378835 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.385216 5030 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.391643 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.405498 5030 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46948->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.405575 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46948->192.168.126.11:17697: read: connection reset by peer" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.405901 5030 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe 
status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.405923 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.412303 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.424832 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.452532 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.452588 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.452613 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.452668 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.452690 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.452963 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453038 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453271 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453320 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453362 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453628 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453675 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453705 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453731 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453750 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453771 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453795 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453826 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453855 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453878 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453900 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453926 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 11:53:20 crc 
kubenswrapper[5030]: I1128 11:53:20.453955 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.453977 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454031 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454053 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454073 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454095 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454118 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454137 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454130 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454159 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454182 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454203 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454248 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454268 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454286 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454303 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454339 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454358 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454394 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454410 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " 
Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454430 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454434 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454447 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454489 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454508 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454529 5030 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454553 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454573 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454590 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454610 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454644 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454660 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454656 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454676 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454716 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454733 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454742 5030 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454754 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454842 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454864 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454881 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454918 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454920 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454965 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.454992 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 
11:53:20.455018 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455042 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455068 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455073 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455097 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455122 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455151 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455176 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455204 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455229 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455233 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455254 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455277 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455299 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455320 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") 
pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455341 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455366 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455385 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455396 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455541 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455555 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455573 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455603 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455628 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455651 
5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455679 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455702 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455704 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455738 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455759 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455778 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455796 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455814 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455857 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455880 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455898 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455917 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455938 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 11:53:20 crc 
kubenswrapper[5030]: I1128 11:53:20.455958 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455974 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.455978 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456087 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456111 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456137 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456158 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456180 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456198 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456215 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456218 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456237 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456257 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456277 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456276 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456306 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456327 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456348 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456369 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456388 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456407 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" 
(UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456426 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456445 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456475 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456493 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456517 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 11:53:20 crc 
kubenswrapper[5030]: I1128 11:53:20.456539 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456558 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456575 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456590 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456614 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456630 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456648 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456668 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456685 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456702 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456723 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 
11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456743 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456759 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456778 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456794 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456811 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456828 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456844 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456860 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456877 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456896 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456913 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 11:53:20 crc 
kubenswrapper[5030]: I1128 11:53:20.456931 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456948 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456966 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.456985 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457002 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457026 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457043 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457060 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457078 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457097 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457109 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457119 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457139 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457156 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457175 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457196 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457216 5030 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457236 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457254 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457276 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457281 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457292 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457300 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457313 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457383 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457419 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 11:53:20 crc 
kubenswrapper[5030]: I1128 11:53:20.457452 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457515 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457567 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457603 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457628 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457664 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457702 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457729 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457765 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457800 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457832 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457866 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457898 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457935 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457972 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458021 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458058 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458093 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458127 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458161 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458194 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458231 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 
crc kubenswrapper[5030]: I1128 11:53:20.458241 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458593 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458608 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458945 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.458952 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459312 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459317 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459437 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459442 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459606 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459731 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459872 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.459965 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460048 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460097 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460274 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460294 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460380 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460513 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460590 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460605 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460706 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460748 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460844 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.460915 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.461042 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.461164 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.461320 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.461357 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.461521 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.457465 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.461886 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.462217 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.462380 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.462564 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.462690 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.462834 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.462908 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.462930 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463091 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463112 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463325 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463343 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463568 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463576 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463839 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463900 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.463952 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.464320 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.464404 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.464578 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.464454 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.464784 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.464989 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465006 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465144 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465278 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465292 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465527 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465661 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465799 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.465920 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.461621 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466235 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466283 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466319 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466353 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466392 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466430 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466494 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466532 5030 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466576 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466615 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466649 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466682 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466710 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466736 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466761 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466828 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466869 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466909 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466940 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466968 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.466999 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467027 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc 
kubenswrapper[5030]: I1128 11:53:20.467053 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467080 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467105 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467130 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467155 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467180 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467204 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467299 5030 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467323 5030 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467341 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467356 5030 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 
11:53:20.467370 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467385 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467399 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467414 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467429 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467444 5030 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467458 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467506 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467524 5030 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467543 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467562 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467579 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467597 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467616 5030 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467633 5030 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" 
Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467650 5030 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467666 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467680 5030 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467693 5030 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467707 5030 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467721 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467736 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 
11:53:20.467755 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467774 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467796 5030 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467815 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467841 5030 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467861 5030 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467880 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467900 5030 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467926 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467946 5030 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467962 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467981 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.467998 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468016 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468034 5030 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468057 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468081 5030 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468108 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468132 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468151 5030 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468174 5030 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468195 5030 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath 
\"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468215 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468231 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468245 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468263 5030 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468278 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.468293 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.469444 5030 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.470903 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.471262 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.471343 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.471546 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:53:20.971436086 +0000 UTC m=+18.913178769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.471550 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.471572 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.472003 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.472347 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.472427 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.472450 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.472734 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.472947 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.472952 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.473958 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.474196 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.474399 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.474652 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.474695 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.474815 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.474872 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.475017 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.475106 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.475359 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.475995 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.477232 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.477397 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.477561 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.478302 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.478566 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.478855 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.478982 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.479328 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.479360 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.479769 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.480170 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.481658 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.482015 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.482351 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.482661 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.483082 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.483353 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.483759 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.484092 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.484703 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.484841 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.484913 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.485375 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.485382 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.485539 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.485668 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.485731 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.485878 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.492673 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.492786 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:20.992762468 +0000 UTC m=+18.934505141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.499400 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.500138 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.500441 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.500602 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:21.000577913 +0000 UTC m=+18.942320596 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.501878 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.505135 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.506330 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.511707 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.512047 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.512107 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.512541 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.513273 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.514631 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.513457 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.513610 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.515036 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.515505 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.515764 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.515979 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.516024 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.516591 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.516807 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.516849 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.518066 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.518118 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.518459 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.518605 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.518973 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.518999 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.519277 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.519358 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.519615 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.523838 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.523997 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.524140 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.524168 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.524376 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:21.024341649 +0000 UTC m=+18.966084352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.524556 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.524577 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.524588 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.524637 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:21.024624546 +0000 UTC m=+18.966367229 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.525069 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.525781 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.526207 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.526397 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.528923 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.530595 5030 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b" exitCode=255 Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.530684 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.530688 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b"} Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.531425 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.532862 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.537733 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.538047 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.538152 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.538391 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.538634 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.539608 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.539853 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.540562 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.540844 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.541308 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.541398 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.543700 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.543946 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.544645 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.546627 5030 scope.go:117] "RemoveContainer" containerID="8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.546916 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547166 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547250 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547321 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547571 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547658 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547668 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547762 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.547853 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.548623 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.548915 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.549715 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.550073 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.551779 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.560668 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.568600 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569023 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569168 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569267 5030 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569330 5030 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569389 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569450 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569539 5030 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569595 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 
11:53:20.569648 5030 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569706 5030 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569766 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569820 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569875 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569928 5030 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569980 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570038 5030 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570095 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570153 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570213 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570274 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570407 5030 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570485 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570560 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc 
kubenswrapper[5030]: I1128 11:53:20.570618 5030 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570672 5030 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570732 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570794 5030 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.570846 5030 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.571663 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.571751 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.571813 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.571892 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.571957 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569268 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572019 5030 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572085 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572101 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572114 5030 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572131 5030 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572142 5030 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572154 5030 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572166 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572178 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572190 5030 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572201 5030 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572211 5030 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572224 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572235 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572244 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.569301 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.572255 5030 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573225 5030 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573294 5030 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573308 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573348 5030 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573361 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573373 5030 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573386 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573416 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath 
\"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573428 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573440 5030 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573453 5030 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573489 5030 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573503 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573518 5030 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573531 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 
11:53:20.573543 5030 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573556 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573568 5030 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573579 5030 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573590 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573600 5030 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573623 5030 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573659 5030 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573671 5030 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573854 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573894 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573906 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573917 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573928 5030 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573939 5030 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573952 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573962 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573972 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.573983 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.574012 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.574025 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.574037 5030 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.577377 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.577521 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582086 5030 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582244 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582267 5030 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582291 5030 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582437 5030 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582487 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582508 5030 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582529 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582553 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582574 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582593 5030 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath 
\"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582611 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582628 5030 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582646 5030 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582663 5030 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582679 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582696 5030 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582712 5030 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582729 5030 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582747 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582765 5030 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582781 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582797 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582813 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582829 5030 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582844 5030 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582860 5030 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582878 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582897 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582919 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582936 5030 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582952 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.582968 5030 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.584091 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.584613 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.584974 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585033 5030 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585101 5030 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585152 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585192 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585241 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585267 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585286 5030 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585653 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585690 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.585864 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586516 5030 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586551 5030 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586580 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586596 5030 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586606 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586620 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586633 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586646 5030 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.586659 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.598436 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.612711 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.616303 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.627351 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.627430 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.633024 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.687613 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.687653 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.687667 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:53:20 crc kubenswrapper[5030]: I1128 11:53:20.990354 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:20 crc kubenswrapper[5030]: E1128 11:53:20.990559 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:53:21.99051866 +0000 UTC m=+19.932261343 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.091675 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.091726 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.091746 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.091762 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.091868 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.091923 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:22.091905469 +0000 UTC m=+20.033648152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.091923 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.091958 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.091975 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.091986 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.091998 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.092017 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.092039 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:22.092033323 +0000 UTC m=+20.033776006 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.092053 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:22.092046783 +0000 UTC m=+20.033789466 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.092082 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.092148 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:22.092100415 +0000 UTC m=+20.033843098 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.107047 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7w8nl"] Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.107353 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7w8nl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.107764 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-cqr62"] Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.108100 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.109410 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.109733 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.109889 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.109997 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.110859 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 11:53:21 crc 
kubenswrapper[5030]: I1128 11:53:21.112068 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.112428 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.112549 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.127312 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.139456 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.151253 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.161415 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.172903 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.183116 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.188188 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.191643 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.192290 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.196823 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.202431 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.222996 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.240856 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.255369 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.267564 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.279507 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.293032 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.293364 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8e6d4c7-9635-4925-bf75-96379201ef67-proxy-tls\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.293426 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krcw6\" (UniqueName: \"kubernetes.io/projected/cb9b76b5-26c0-4a17-a384-356a8b82fed4-kube-api-access-krcw6\") pod \"node-resolver-7w8nl\" (UID: \"cb9b76b5-26c0-4a17-a384-356a8b82fed4\") " pod="openshift-dns/node-resolver-7w8nl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.293519 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cb9b76b5-26c0-4a17-a384-356a8b82fed4-hosts-file\") pod \"node-resolver-7w8nl\" (UID: \"cb9b76b5-26c0-4a17-a384-356a8b82fed4\") " pod="openshift-dns/node-resolver-7w8nl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.293547 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d8e6d4c7-9635-4925-bf75-96379201ef67-rootfs\") pod 
\"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.293599 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8e6d4c7-9635-4925-bf75-96379201ef67-mcd-auth-proxy-config\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.293619 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm28r\" (UniqueName: \"kubernetes.io/projected/d8e6d4c7-9635-4925-bf75-96379201ef67-kube-api-access-bm28r\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.307003 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.321189 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.346313 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.358552 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.394870 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8e6d4c7-9635-4925-bf75-96379201ef67-mcd-auth-proxy-config\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " 
pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.394927 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm28r\" (UniqueName: \"kubernetes.io/projected/d8e6d4c7-9635-4925-bf75-96379201ef67-kube-api-access-bm28r\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.394966 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8e6d4c7-9635-4925-bf75-96379201ef67-proxy-tls\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.395861 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8e6d4c7-9635-4925-bf75-96379201ef67-mcd-auth-proxy-config\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.394988 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krcw6\" (UniqueName: \"kubernetes.io/projected/cb9b76b5-26c0-4a17-a384-356a8b82fed4-kube-api-access-krcw6\") pod \"node-resolver-7w8nl\" (UID: \"cb9b76b5-26c0-4a17-a384-356a8b82fed4\") " pod="openshift-dns/node-resolver-7w8nl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.396047 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cb9b76b5-26c0-4a17-a384-356a8b82fed4-hosts-file\") pod \"node-resolver-7w8nl\" (UID: 
\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\") " pod="openshift-dns/node-resolver-7w8nl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.396076 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d8e6d4c7-9635-4925-bf75-96379201ef67-rootfs\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.396156 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d8e6d4c7-9635-4925-bf75-96379201ef67-rootfs\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.396168 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cb9b76b5-26c0-4a17-a384-356a8b82fed4-hosts-file\") pod \"node-resolver-7w8nl\" (UID: \"cb9b76b5-26c0-4a17-a384-356a8b82fed4\") " pod="openshift-dns/node-resolver-7w8nl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.399390 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8e6d4c7-9635-4925-bf75-96379201ef67-proxy-tls\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.411406 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krcw6\" (UniqueName: \"kubernetes.io/projected/cb9b76b5-26c0-4a17-a384-356a8b82fed4-kube-api-access-krcw6\") pod \"node-resolver-7w8nl\" (UID: \"cb9b76b5-26c0-4a17-a384-356a8b82fed4\") " pod="openshift-dns/node-resolver-7w8nl" Nov 
28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.421399 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7w8nl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.423041 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm28r\" (UniqueName: \"kubernetes.io/projected/d8e6d4c7-9635-4925-bf75-96379201ef67-kube-api-access-bm28r\") pod \"machine-config-daemon-cqr62\" (UID: \"d8e6d4c7-9635-4925-bf75-96379201ef67\") " pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.427905 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:53:21 crc kubenswrapper[5030]: W1128 11:53:21.437967 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb9b76b5_26c0_4a17_a384_356a8b82fed4.slice/crio-38621e93e79278af0b1b8d208a7fa6d71226ffaab8ccd247b9ca89d30405c1bc WatchSource:0}: Error finding container 38621e93e79278af0b1b8d208a7fa6d71226ffaab8ccd247b9ca89d30405c1bc: Status 404 returned error can't find the container with id 38621e93e79278af0b1b8d208a7fa6d71226ffaab8ccd247b9ca89d30405c1bc Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.472693 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cx2sr"] Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.473571 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kfz78"] Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.473713 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vnfr"] Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.474654 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.475418 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.475839 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.479694 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.481881 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.487698 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.489226 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.489439 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.489623 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.490111 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.490517 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.491245 5030 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.491403 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.491596 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.491808 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.491916 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.494642 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.509717 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.526971 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.537324 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.537390 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c9ba8a63814dfe4f41cde9f75e3084858866f0b3c9ce5da451a03a11ce889a1"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.539566 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a6c63836a546ca023648e75a6ba8313dce38a69ce16cd51d7ec27e3194cdc30a"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.541748 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.541774 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.541785 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"450ffd7a42bde2a47ef60bcd9f945a61557185d6ea376c61d07879e4bf3354c8"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.543227 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.548290 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.556703 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.556789 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.559586 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.563035 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"134203a6212534c45acae2b849a58a58831c220dcba485700e2111c1e3847d6b"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.565834 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7w8nl" event={"ID":"cb9b76b5-26c0-4a17-a384-356a8b82fed4","Type":"ContainerStarted","Data":"38621e93e79278af0b1b8d208a7fa6d71226ffaab8ccd247b9ca89d30405c1bc"} Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.573441 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: E1128 11:53:21.573546 5030 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.584838 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598455 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598548 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598590 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7e46bfdf-4891-4bd6-8c51-3453013f5285-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598619 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-os-release\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598643 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-k8s-cni-cncf-io\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598771 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-etc-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598849 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-cnibin\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598884 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-script-lib\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598921 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-system-cni-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.598952 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-hostroot\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.599006 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-systemd-units\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.599066 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-var-lib-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.599147 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-config\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.599228 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovn-node-metrics-cert\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.599299 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.599383 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-ovn\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.599430 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-node-log\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.600183 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.600349 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-systemd\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.600444 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-cnibin\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.600632 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-kubelet\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.600679 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-log-socket\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.600709 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-netd\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.600780 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-cni-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601457 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs9fd\" (UniqueName: \"kubernetes.io/projected/4ee84379-3754-48c5-aaab-15dbc36caa16-kube-api-access-zs9fd\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601569 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e46bfdf-4891-4bd6-8c51-3453013f5285-cni-binary-copy\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601609 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgmb\" (UniqueName: \"kubernetes.io/projected/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-kube-api-access-9xgmb\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601669 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-socket-dir-parent\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601716 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-netns\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601748 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-conf-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601805 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-etc-kubernetes\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601836 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-multus-certs\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601857 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-kubelet\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601875 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-netns\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601894 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-cni-multus\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601913 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-daemon-config\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601930 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-slash\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.601946 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-bin\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.602010 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-os-release\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.602037 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ee84379-3754-48c5-aaab-15dbc36caa16-cni-binary-copy\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.602073 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rsx2\" (UniqueName: \"kubernetes.io/projected/7e46bfdf-4891-4bd6-8c51-3453013f5285-kube-api-access-6rsx2\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.602100 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-ovn-kubernetes\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.602121 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-env-overrides\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.602140 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-system-cni-dir\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.602234 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-cni-bin\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " 
pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.612511 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.627143 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.651929 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.685323 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.700635 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703807 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-config\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703854 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-script-lib\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703883 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-system-cni-dir\") pod \"multus-kfz78\" (UID: 
\"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703903 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-hostroot\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703925 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-systemd-units\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703950 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-var-lib-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703970 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.703992 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovn-node-metrics-cert\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704017 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704032 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-hostroot\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704067 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-ovn\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704035 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-ovn\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704095 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-systemd-units\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704105 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-node-log\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704120 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-var-lib-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704130 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-systemd\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704155 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-cnibin\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704183 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-kubelet\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704214 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704227 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-log-socket\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704253 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-log-socket\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704362 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704384 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-netd\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704397 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-node-log\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704422 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-kubelet\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704424 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-system-cni-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704392 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-cnibin\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704410 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-systemd\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704483 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-netd\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704509 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e46bfdf-4891-4bd6-8c51-3453013f5285-cni-binary-copy\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704539 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-cni-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704660 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs9fd\" (UniqueName: \"kubernetes.io/projected/4ee84379-3754-48c5-aaab-15dbc36caa16-kube-api-access-zs9fd\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704718 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xgmb\" (UniqueName: \"kubernetes.io/projected/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-kube-api-access-9xgmb\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704717 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-script-lib\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 
11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704745 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-socket-dir-parent\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704757 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-config\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704794 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-netns\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704798 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-socket-dir-parent\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704819 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-cni-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704827 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-conf-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704859 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-netns\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704878 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-etc-kubernetes\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704900 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-netns\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704918 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-multus-certs\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704946 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-kubelet\") pod \"ovnkube-node-8vnfr\" (UID: 
\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704963 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ee84379-3754-48c5-aaab-15dbc36caa16-cni-binary-copy\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704979 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-cni-multus\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.704996 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-daemon-config\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705010 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-slash\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705030 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-bin\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc 
kubenswrapper[5030]: I1128 11:53:21.705051 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-os-release\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705068 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rsx2\" (UniqueName: \"kubernetes.io/projected/7e46bfdf-4891-4bd6-8c51-3453013f5285-kube-api-access-6rsx2\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705103 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-ovn-kubernetes\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705108 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-conf-dir\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705103 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-kubelet\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 
11:53:21.705147 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-env-overrides\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705168 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-system-cni-dir\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705190 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-cni-bin\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705213 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705234 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-cnibin\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705259 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7e46bfdf-4891-4bd6-8c51-3453013f5285-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705289 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-os-release\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705313 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-k8s-cni-cncf-io\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705346 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-etc-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705356 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e46bfdf-4891-4bd6-8c51-3453013f5285-cni-binary-copy\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705399 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-etc-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705618 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-bin\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705659 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-cni-multus\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705710 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ee84379-3754-48c5-aaab-15dbc36caa16-cni-binary-copy\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705769 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-slash\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705844 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-env-overrides\") pod \"ovnkube-node-8vnfr\" (UID: 
\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705868 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-os-release\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705882 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-system-cni-dir\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705875 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-netns\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705912 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-multus-certs\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705926 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-openvswitch\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705934 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-ovn-kubernetes\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.706061 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-run-k8s-cni-cncf-io\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.706092 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-etc-kubernetes\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705844 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e46bfdf-4891-4bd6-8c51-3453013f5285-cnibin\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.705893 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-host-var-lib-cni-bin\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.706128 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ee84379-3754-48c5-aaab-15dbc36caa16-os-release\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.706259 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ee84379-3754-48c5-aaab-15dbc36caa16-multus-daemon-config\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.706430 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7e46bfdf-4891-4bd6-8c51-3453013f5285-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.715559 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovn-node-metrics-cert\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.721052 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.729113 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xgmb\" (UniqueName: \"kubernetes.io/projected/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-kube-api-access-9xgmb\") pod \"ovnkube-node-8vnfr\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.733997 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rsx2\" (UniqueName: \"kubernetes.io/projected/7e46bfdf-4891-4bd6-8c51-3453013f5285-kube-api-access-6rsx2\") pod \"multus-additional-cni-plugins-cx2sr\" (UID: \"7e46bfdf-4891-4bd6-8c51-3453013f5285\") " pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.738185 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs9fd\" (UniqueName: \"kubernetes.io/projected/4ee84379-3754-48c5-aaab-15dbc36caa16-kube-api-access-zs9fd\") pod \"multus-kfz78\" (UID: \"4ee84379-3754-48c5-aaab-15dbc36caa16\") " pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.747280 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.763687 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.777792 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.778731 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.792154 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.792675 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.809573 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.845821 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.863061 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.867377 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.878050 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec1
2992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: W1128 11:53:21.880166 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44c9601c_cc85_4e79_aadd_8d20e2ea9f12.slice/crio-3e028ea9d3bf1d8a39325d8ffd5fb17e5d86435c2af3d682ae2b5dac6621ed9d WatchSource:0}: Error finding container 3e028ea9d3bf1d8a39325d8ffd5fb17e5d86435c2af3d682ae2b5dac6621ed9d: Status 404 returned error can't find the container with id 3e028ea9d3bf1d8a39325d8ffd5fb17e5d86435c2af3d682ae2b5dac6621ed9d Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.887951 5030 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.890144 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.896370 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kfz78" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.909269 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: W1128 11:53:21.919592 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ee84379_3754_48c5_aaab_15dbc36caa16.slice/crio-d51543b337fabc088aa12fe30baa28a65807202b2c3714384f242b6e523e1cd8 WatchSource:0}: Error finding container d51543b337fabc088aa12fe30baa28a65807202b2c3714384f242b6e523e1cd8: Status 404 returned error can't find the container with id d51543b337fabc088aa12fe30baa28a65807202b2c3714384f242b6e523e1cd8 Nov 28 11:53:21 crc kubenswrapper[5030]: W1128 11:53:21.920491 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e46bfdf_4891_4bd6_8c51_3453013f5285.slice/crio-fbb8684b3ab824b13aace0ebb46e63add8cdfda67e40e9bf53b06dfe1fbd09b0 WatchSource:0}: Error finding container fbb8684b3ab824b13aace0ebb46e63add8cdfda67e40e9bf53b06dfe1fbd09b0: Status 404 returned error can't find the container with id fbb8684b3ab824b13aace0ebb46e63add8cdfda67e40e9bf53b06dfe1fbd09b0 Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.922817 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.929299 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:21 crc kubenswrapper[5030]: I1128 11:53:21.947623 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:21Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.008527 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.009005 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:53:24.008971399 +0000 UTC m=+21.950714082 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.020897 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name
\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa
2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.076769 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.110388 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.110441 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.110506 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.110534 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110612 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110697 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110717 5030 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110724 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:24.110695967 +0000 UTC m=+22.052438650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110731 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110789 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:24.110772859 +0000 UTC m=+22.052515542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110819 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110849 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:24.110842231 +0000 UTC m=+22.052584914 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110850 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110884 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110892 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.110919 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:24.110912153 +0000 UTC m=+22.052654836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.112609 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kub
e\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.142312 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.164057 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.184995 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.214291 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.231719 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.258977 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.278128 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.304262 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.334600 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.361991 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.392207 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.392264 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.392219 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.392368 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.392531 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.392700 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.396895 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.397627 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.398311 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.398966 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.400631 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.401145 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.401765 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.402722 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.403338 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.404226 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.404750 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.405844 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.406346 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.407264 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.407790 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.408668 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.409227 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.409721 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.410667 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.411241 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.411658 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.412067 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.413046 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.413459 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.414544 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.414980 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 
11:53:22.416098 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.417085 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.417576 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.418571 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.419044 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.420425 5030 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.420639 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.422554 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.423664 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.424123 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.425722 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.426731 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.427286 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.427914 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.429135 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.429414 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.429755 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.430962 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.431999 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.432700 5030 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.433800 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.434840 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.435776 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.436515 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.437501 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.438001 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.438457 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.439501 5030 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.440085 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.440996 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.442663 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.467402 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.489515 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.508995 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.521947 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.535456 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.553332 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.574219 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerStarted","Data":"fbb8684b3ab824b13aace0ebb46e63add8cdfda67e40e9bf53b06dfe1fbd09b0"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.575989 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769" exitCode=0 Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.576104 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.576166 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"3e028ea9d3bf1d8a39325d8ffd5fb17e5d86435c2af3d682ae2b5dac6621ed9d"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.578251 5030 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.579212 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerStarted","Data":"b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.579250 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerStarted","Data":"d51543b337fabc088aa12fe30baa28a65807202b2c3714384f242b6e523e1cd8"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.583603 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.583644 5030 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.587080 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7w8nl" event={"ID":"cb9b76b5-26c0-4a17-a384-356a8b82fed4","Type":"ContainerStarted","Data":"964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145"} Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.593498 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: E1128 11:53:22.599007 5030 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.606286 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.634443 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.691435 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.717905 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.761322 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.803979 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.837696 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.889352 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.917689 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.956417 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:22 crc kubenswrapper[5030]: I1128 11:53:22.997305 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.046071 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.077927 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.124112 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.156276 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.199922 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.241593 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.450574 5030 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.453704 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.453773 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc 
kubenswrapper[5030]: I1128 11:53:23.453794 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.453979 5030 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.464102 5030 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.464650 5030 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.465936 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.465982 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.466000 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.466019 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.466040 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: E1128 11:53:23.488436 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.493142 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.493197 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.493215 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.493240 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.493256 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: E1128 11:53:23.505996 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.510635 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.510679 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.510699 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.510725 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.510744 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: E1128 11:53:23.523547 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.528564 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.528646 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.528669 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.528698 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.528720 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: E1128 11:53:23.548802 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.554368 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.554429 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.554440 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.554474 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.554488 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: E1128 11:53:23.569964 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: E1128 11:53:23.570208 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.572286 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.572341 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.572359 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.572391 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.572412 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.593610 5030 generic.go:334] "Generic (PLEG): container finished" podID="7e46bfdf-4891-4bd6-8c51-3453013f5285" containerID="77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc" exitCode=0 Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.593735 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerDied","Data":"77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.601040 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.601087 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.601102 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.601118 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.601133 5030 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.601145 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.603205 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.611386 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.643092 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.662107 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.674941 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.674987 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.674999 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc 
kubenswrapper[5030]: I1128 11:53:23.675016 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.675026 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.682453 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.697762 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.712389 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.726802 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.742476 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.758826 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.773438 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.779296 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.779514 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.779531 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.779661 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.779678 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.793572 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.808258 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.821431 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.855894 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.875130 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.882766 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.882801 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.882814 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.882837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.882850 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.920979 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.960944 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.985936 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.985978 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.985988 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.986003 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.986014 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:23Z","lastTransitionTime":"2025-11-28T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:23 crc kubenswrapper[5030]: I1128 11:53:23.997927 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:23Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.033904 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.034148 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:53:28.034107479 +0000 UTC m=+25.975850172 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.038107 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.080958 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.088612 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.088648 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.088660 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.088681 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.088694 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.122617 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.134897 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.134947 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 
11:53:24.134977 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.135005 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135136 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135141 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135165 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135207 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135222 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135242 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135176 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135403 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135191 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:28.135175239 +0000 UTC m=+26.076917922 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135524 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:28.135462727 +0000 UTC m=+26.077205600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135548 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:28.135536659 +0000 UTC m=+26.077279562 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.135567 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:28.135556549 +0000 UTC m=+26.077299472 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.164561 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.191375 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.191423 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.191441 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.191459 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.191491 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.214202 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.236965 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.275313 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.294284 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.294319 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.294328 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.294343 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.294353 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.316620 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z 
is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.355744 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.392161 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.392307 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.392368 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.392425 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.392573 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:24 crc kubenswrapper[5030]: E1128 11:53:24.392637 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.395028 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.397382 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.397428 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.397448 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.397516 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.397573 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.500929 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.501046 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.501070 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.501134 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.501157 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.603658 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.603750 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.603770 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.603861 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.603882 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.608440 5030 generic.go:334] "Generic (PLEG): container finished" podID="7e46bfdf-4891-4bd6-8c51-3453013f5285" containerID="0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1" exitCode=0 Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.608526 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerDied","Data":"0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.629351 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.651496 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.670217 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.688344 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.707259 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.707299 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.707308 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.707327 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.707341 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.708655 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d
4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.729188 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tun
ing-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.754411 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019
bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.767601 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.786218 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.803679 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.810277 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc 
kubenswrapper[5030]: I1128 11:53:24.810335 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.810348 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.810368 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.810380 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.836675 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.877339 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.912545 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.912603 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.912614 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.912635 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.912647 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:24Z","lastTransitionTime":"2025-11-28T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.914724 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:24 crc kubenswrapper[5030]: I1128 11:53:24.953592 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:24Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.015290 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.015334 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.015345 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.015364 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.015374 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.118548 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.118599 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.118609 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.118626 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.118644 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.221689 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.221763 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.221780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.221808 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.221828 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.325228 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.325281 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.325289 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.325305 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.325316 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.427794 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.427852 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.427867 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.427885 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.427899 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.531027 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.531110 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.531140 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.531183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.531210 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.614931 5030 generic.go:334] "Generic (PLEG): container finished" podID="7e46bfdf-4891-4bd6-8c51-3453013f5285" containerID="4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c" exitCode=0 Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.615061 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerDied","Data":"4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.621935 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.635008 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.635587 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.635620 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.635632 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 
11:53:25.635648 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.635662 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.650625 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.664877 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.677935 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.693563 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.708222 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.724265 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.738593 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.738665 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.738679 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.738703 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.738724 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.739329 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.770380 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.792548 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.820939 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.837560 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.844697 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.844757 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.844769 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc 
kubenswrapper[5030]: I1128 11:53:25.844791 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.844805 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.856171 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.880429 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.946986 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.947037 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.947051 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.947072 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:25 crc kubenswrapper[5030]: I1128 11:53:25.947086 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:25Z","lastTransitionTime":"2025-11-28T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.050425 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.050509 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.050527 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.050552 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.050572 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.153533 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.153596 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.153612 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.153635 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.153651 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.256226 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.256304 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.256322 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.256349 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.256369 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.359936 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.360010 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.360027 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.360052 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.360073 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.392727 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.392727 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.392720 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:26 crc kubenswrapper[5030]: E1128 11:53:26.392945 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:26 crc kubenswrapper[5030]: E1128 11:53:26.393187 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:26 crc kubenswrapper[5030]: E1128 11:53:26.393368 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.463817 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.463877 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.463897 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.463933 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.463961 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.567420 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.567519 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.567548 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.567575 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.567594 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.631413 5030 generic.go:334] "Generic (PLEG): container finished" podID="7e46bfdf-4891-4bd6-8c51-3453013f5285" containerID="a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e" exitCode=0 Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.631511 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerDied","Data":"a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.669512 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.672401 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.672432 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.672442 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 
11:53:26.672484 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.672498 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.689490 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.729158 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.771027 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.775020 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.775075 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.775090 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.775110 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.775124 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.794308 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.807087 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.816935 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.828042 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.840236 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.855901 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.871524 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.877390 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.877441 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.877455 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.877507 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.877524 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.887993 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.913059 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.930485 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:26Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.980287 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.980324 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.980333 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.980352 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:26 crc kubenswrapper[5030]: I1128 11:53:26.980362 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:26Z","lastTransitionTime":"2025-11-28T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.084054 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.084113 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.084134 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.084161 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.084183 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.187925 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.188004 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.188023 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.188056 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.188077 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.291407 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.291511 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.291531 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.291559 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.291577 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.363700 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-42bsd"] Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.364240 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.368966 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.369598 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.369782 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.371831 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.392662 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.395136 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.395175 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.395188 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.395207 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.395222 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.415594 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.437675 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.474290 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.480489 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dgbc\" (UniqueName: \"kubernetes.io/projected/ecb0da03-4159-42f4-aa72-67c3cbbca4db-kube-api-access-6dgbc\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.480782 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecb0da03-4159-42f4-aa72-67c3cbbca4db-host\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.480862 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecb0da03-4159-42f4-aa72-67c3cbbca4db-serviceca\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.488930 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.499780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.499865 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.499883 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 
11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.499909 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.499925 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.514637 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85
aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.535046 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.555773 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.574392 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.581946 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecb0da03-4159-42f4-aa72-67c3cbbca4db-host\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.582123 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecb0da03-4159-42f4-aa72-67c3cbbca4db-serviceca\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.582236 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dgbc\" (UniqueName: \"kubernetes.io/projected/ecb0da03-4159-42f4-aa72-67c3cbbca4db-kube-api-access-6dgbc\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.582899 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecb0da03-4159-42f4-aa72-67c3cbbca4db-host\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.586288 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/ecb0da03-4159-42f4-aa72-67c3cbbca4db-serviceca\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.603314 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.603581 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.603626 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.603657 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.603673 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.609216 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dgbc\" (UniqueName: \"kubernetes.io/projected/ecb0da03-4159-42f4-aa72-67c3cbbca4db-kube-api-access-6dgbc\") pod \"node-ca-42bsd\" (UID: \"ecb0da03-4159-42f4-aa72-67c3cbbca4db\") " pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.613647 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befb
bc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.627248 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.643991 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.646736 5030 generic.go:334] "Generic (PLEG): container finished" podID="7e46bfdf-4891-4bd6-8c51-3453013f5285" containerID="e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812" exitCode=0 Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.646778 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerDied","Data":"e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.656726 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.669856 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.681593 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.689441 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-42bsd" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.695811 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-contr
oller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.705746 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.705790 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.705801 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.705820 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.705832 5030 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.729587 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6a
e997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.750073 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.766426 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.785440 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.801853 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028
993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.809261 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.809299 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.809310 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.809331 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.809344 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.817610 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.834866 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.849826 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.865098 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.885808 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.897575 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.911723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.911797 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.911815 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 
11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.911839 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.911857 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:27Z","lastTransitionTime":"2025-11-28T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.917754 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.935264 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:27 crc kubenswrapper[5030]: I1128 11:53:27.950229 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:27Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.019336 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.019828 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.019842 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.019864 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.019888 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.086991 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.087236 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:53:36.087201179 +0000 UTC m=+34.028943862 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.122443 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.122535 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.122549 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.122572 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.122590 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.187569 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.187645 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.187671 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.187711 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.187882 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.187930 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.187946 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.187933 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188012 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:36.187996688 +0000 UTC m=+34.129739371 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188051 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-28 11:53:36.188023548 +0000 UTC m=+34.129766231 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188109 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188172 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:36.188162302 +0000 UTC m=+34.129904985 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188368 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188404 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188419 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.188516 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:36.188492651 +0000 UTC m=+34.130235334 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.225488 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.225538 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.225547 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.225566 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.225577 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.331061 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.331105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.331118 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.331138 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.331150 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.392586 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.392623 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.392746 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.392743 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.392861 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:28 crc kubenswrapper[5030]: E1128 11:53:28.392941 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.433546 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.433649 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.433669 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.433695 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.433714 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.537183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.537272 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.537293 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.537330 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.537352 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.640420 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.640551 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.640575 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.640646 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.640668 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.656726 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42bsd" event={"ID":"ecb0da03-4159-42f4-aa72-67c3cbbca4db","Type":"ContainerStarted","Data":"fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.656806 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42bsd" event={"ID":"ecb0da03-4159-42f4-aa72-67c3cbbca4db","Type":"ContainerStarted","Data":"40f02b324982bb4abbb6b61a09d2d1b0ddf1ba0b7be146c6f2fd579abf04900c"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.662823 5030 generic.go:334] "Generic (PLEG): container finished" podID="7e46bfdf-4891-4bd6-8c51-3453013f5285" containerID="09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12" exitCode=0 Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.662945 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerDied","Data":"09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.670264 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.670787 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.676395 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.692913 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.714550 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.734731 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.748674 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.748718 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.748732 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.748783 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.748798 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.765875 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.782903 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.797734 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.814306 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.838927 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.852585 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.853097 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.853116 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.853143 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.853162 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.858081 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.876918 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.892630 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.905328 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.928766 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.943054 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.955783 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.955842 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.955855 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 
11:53:28.955876 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.955890 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:28Z","lastTransitionTime":"2025-11-28T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.972663 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:28 crc kubenswrapper[5030]: I1128 11:53:28.987618 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.000434 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:28Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.014529 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.029772 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.047377 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.059199 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.059267 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.059293 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.059327 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.059352 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.074822 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:
04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.091609 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.111839 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.133588 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.155284 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.162081 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc 
kubenswrapper[5030]: I1128 11:53:29.162122 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.162135 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.162154 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.162170 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.174985 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.196118 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.212600 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.232599 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.265035 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.265094 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.265107 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.265129 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.265145 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.367685 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.367743 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.367756 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.367786 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.367799 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.470935 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.470997 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.471010 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.471033 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.471047 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.574158 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.574217 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.574229 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.574247 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.574259 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.682505 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.682676 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.682702 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.682737 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.682762 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.689578 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" event={"ID":"7e46bfdf-4891-4bd6-8c51-3453013f5285","Type":"ContainerStarted","Data":"b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.692001 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.692059 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.710902 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.731226 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.732876 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.733206 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.754822 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.773371 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.786401 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 
11:53:29.786437 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.786450 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.786491 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.786507 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.812415 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b2
08e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.830227 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.866711 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.888635 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.888669 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.888679 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.888693 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.888704 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.891304 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.907460 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.929854 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.955405 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.976084 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.990156 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:29Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.990890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.990927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.990935 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.990949 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:29 crc kubenswrapper[5030]: I1128 11:53:29.990958 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:29Z","lastTransitionTime":"2025-11-28T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.003546 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.019679 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.031612 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.068485 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.083606 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.093708 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.093753 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.093762 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc 
kubenswrapper[5030]: I1128 11:53:30.093779 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.093789 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.098326 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.112075 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a
712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.127282 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.142257 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.154736 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.169178 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.185052 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.196938 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.196969 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.196983 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.197004 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.197018 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.200428 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.212728 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.223232 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.250125 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.262705 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:30Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.300121 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.300182 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.300200 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.300224 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 
11:53:30.300242 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.392979 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:30 crc kubenswrapper[5030]: E1128 11:53:30.393123 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.393208 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.393275 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:30 crc kubenswrapper[5030]: E1128 11:53:30.393462 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:30 crc kubenswrapper[5030]: E1128 11:53:30.393672 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.402703 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.402782 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.402799 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.402825 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.402840 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.505947 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.506028 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.506054 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.506086 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.506109 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.609512 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.609608 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.609625 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.609653 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.609676 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.712735 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.712814 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.712839 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.712868 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.712896 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.816113 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.816164 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.816174 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.816192 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.816213 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.920809 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.920896 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.920919 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.920949 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:30 crc kubenswrapper[5030]: I1128 11:53:30.920971 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:30Z","lastTransitionTime":"2025-11-28T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.023565 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.023638 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.023652 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.023717 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.023730 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.126308 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.126350 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.126361 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.126378 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.126390 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.229048 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.229110 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.229128 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.229156 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.229174 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.332556 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.332614 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.332631 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.332655 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.332674 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.435754 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.435802 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.435820 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.435845 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.435863 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.538995 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.539051 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.539069 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.539093 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.539110 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.641714 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.641829 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.642183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.642539 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.642823 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.699158 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/0.log" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.702346 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751" exitCode=1 Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.702426 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.703105 5030 scope.go:117] "RemoveContainer" containerID="fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.721094 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.737533 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.746181 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.746348 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.746409 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.746519 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.746599 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.754313 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.781093 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:31Z\\\",\\\"message\\\":\\\"11:53:30.989451 6335 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:30.990236 6335 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 11:53:30.990279 6335 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:30.990672 6335 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:53:30.990696 6335 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:30.990702 6335 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:30.990763 6335 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:53:30.990786 6335 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:30.990808 6335 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 11:53:30.990814 6335 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 11:53:30.990831 6335 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:53:30.990840 6335 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:30.990848 6335 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:53:30.990855 6335 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:53:30.990863 6335 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:30.991920 6335 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.794346 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.810332 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.841878 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.849781 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.849822 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.849831 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.849847 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.849858 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.858138 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.877143 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.937212 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.952100 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.952826 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.952865 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.952944 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.952987 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.953000 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:31Z","lastTransitionTime":"2025-11-28T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.964690 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.978212 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:31 crc kubenswrapper[5030]: I1128 11:53:31.988862 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:31Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.004810 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.055695 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc 
kubenswrapper[5030]: I1128 11:53:32.055757 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.055776 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.055803 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.055821 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.159263 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.159337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.159354 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.159387 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.159404 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.261981 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.262025 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.262034 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.262049 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.262059 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.365184 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.365239 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.365250 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.365270 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.365285 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.392559 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.392574 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:32 crc kubenswrapper[5030]: E1128 11:53:32.392734 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.392584 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:32 crc kubenswrapper[5030]: E1128 11:53:32.392851 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:32 crc kubenswrapper[5030]: E1128 11:53:32.392908 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.419053 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.437865 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.454139 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.466802 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.467372 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.467421 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.467437 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.467460 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.467504 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.492484 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.508885 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.523442 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.541433 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.558235 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.570817 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.570855 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.570864 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.570880 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.570889 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.575976 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z 
is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.594769 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.615203 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.633026 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.659915 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:31Z\\\",\\\"message\\\":\\\"11:53:30.989451 6335 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:30.990236 6335 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 11:53:30.990279 6335 handler.go:208] 
Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:30.990672 6335 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 11:53:30.990696 6335 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:30.990702 6335 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:30.990763 6335 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:53:30.990786 6335 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:30.990808 6335 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 11:53:30.990814 6335 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 11:53:30.990831 6335 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:53:30.990840 6335 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:30.990848 6335 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:53:30.990855 6335 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:53:30.990863 6335 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:30.991920 6335 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.673131 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.673193 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.673210 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.673234 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.673252 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.673684 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.707497 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/0.log" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.710422 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.710947 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.731908 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.744683 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.757533 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.774297 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.776144 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.776177 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.776189 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.776205 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.776216 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.788917 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.801047 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.810978 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.824790 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.848208 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.863032 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.879386 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.879436 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.879453 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.879497 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.879516 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.886120 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.904099 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.921503 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.942154 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:31Z\\\",\\\"message\\\":\\\"11:53:30.989451 6335 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:30.990236 6335 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 11:53:30.990279 6335 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:30.990672 6335 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI1128 11:53:30.990696 6335 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:30.990702 6335 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:30.990763 6335 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:53:30.990786 6335 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:30.990808 6335 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 11:53:30.990814 6335 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 11:53:30.990831 6335 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:53:30.990840 6335 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:30.990848 6335 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:53:30.990855 6335 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:53:30.990863 6335 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:30.991920 6335 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.960430 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.982384 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.982455 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.982515 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.982578 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:32 crc kubenswrapper[5030]: I1128 11:53:32.982598 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:32Z","lastTransitionTime":"2025-11-28T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.085921 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.085995 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.086019 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.086046 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.086063 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.188522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.188551 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.188559 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.188571 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.188581 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.293006 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.293075 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.293097 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.293167 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.293205 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.396994 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.397061 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.397082 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.397103 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.397120 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.499985 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.500047 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.500067 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.500095 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.500113 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.603451 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.603559 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.603579 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.603607 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.603631 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.707606 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.707681 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.707700 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.707730 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.707751 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.709539 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.709621 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.709644 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.709672 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.709693 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.717165 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/1.log" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.718177 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/0.log" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.723283 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab" exitCode=1 Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.723356 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.723515 5030 scope.go:117] "RemoveContainer" containerID="fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.724429 5030 scope.go:117] "RemoveContainer" containerID="4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab" Nov 28 11:53:33 crc kubenswrapper[5030]: E1128 11:53:33.724796 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:53:33 crc kubenswrapper[5030]: E1128 11:53:33.732080 5030 kubelet_node_status.go:585] "Error updating 
node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.738264 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.738611 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.738893 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.739156 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.739411 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.746376 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d
4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: E1128 11:53:33.760905 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.766385 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.766436 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.766456 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.766507 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.766526 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.768243 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.793019 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: E1128 11:53:33.794158 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.798729 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.798775 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.798792 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.798813 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.798829 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.821059 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: E1128 11:53:33.831216 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.836697 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.837246 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.837276 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.837305 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.837327 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.853288 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: E1128 11:53:33.858174 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: E1128 11:53:33.858428 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.860722 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.860773 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.860800 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.860832 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.860855 5030 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.874225 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\
"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.892141 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.912533 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.934332 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.958358 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.964849 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.964892 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.964904 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.964924 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.964940 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:33Z","lastTransitionTime":"2025-11-28T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.981660 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:33 crc kubenswrapper[5030]: I1128 11:53:33.999169 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:33Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.016160 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.047970 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:31Z\\\",\\\"message\\\":\\\"11:53:30.989451 6335 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:30.990236 6335 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 11:53:30.990279 6335 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:30.990672 6335 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI1128 11:53:30.990696 6335 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:30.990702 6335 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:30.990763 6335 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:53:30.990786 6335 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:30.990808 6335 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 11:53:30.990814 6335 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 11:53:30.990831 6335 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:53:30.990840 6335 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:30.990848 6335 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:53:30.990855 6335 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:53:30.990863 6335 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:30.991920 6335 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 
11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.066976 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.067848 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.067904 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.067921 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.067948 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.067966 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.170906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.170973 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.170991 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.171016 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.171033 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.281836 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.282292 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.282770 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.283774 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.283851 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.388253 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.388631 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.388833 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.388979 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.389121 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.392339 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.392431 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.392339 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:34 crc kubenswrapper[5030]: E1128 11:53:34.392570 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:34 crc kubenswrapper[5030]: E1128 11:53:34.393053 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:34 crc kubenswrapper[5030]: E1128 11:53:34.393185 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.492617 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.492677 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.492694 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.492721 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.492740 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.596189 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.596253 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.596274 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.596301 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.596320 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.637813 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph"] Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.638738 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.642275 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.642358 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.661691 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.669315 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5b961b1-b622-458f-b946-ba3b2c403918-env-overrides\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.669453 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5b961b1-b622-458f-b946-ba3b2c403918-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.669603 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5b961b1-b622-458f-b946-ba3b2c403918-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.669671 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl82d\" (UniqueName: \"kubernetes.io/projected/b5b961b1-b622-458f-b946-ba3b2c403918-kube-api-access-vl82d\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: 
\"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.698814 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58
408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025
-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.699650 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.699684 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.699696 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.699714 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.699727 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.718762 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.729372 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/1.log" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.735084 5030 scope.go:117] "RemoveContainer" containerID="4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab" Nov 28 11:53:34 crc kubenswrapper[5030]: E1128 11:53:34.735228 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.738626 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.765501 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.771231 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5b961b1-b622-458f-b946-ba3b2c403918-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.771293 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5b961b1-b622-458f-b946-ba3b2c403918-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.771326 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl82d\" (UniqueName: \"kubernetes.io/projected/b5b961b1-b622-458f-b946-ba3b2c403918-kube-api-access-vl82d\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 
11:53:34.771398 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5b961b1-b622-458f-b946-ba3b2c403918-env-overrides\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.773015 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b5b961b1-b622-458f-b946-ba3b2c403918-env-overrides\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.774518 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b5b961b1-b622-458f-b946-ba3b2c403918-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.778948 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b5b961b1-b622-458f-b946-ba3b2c403918-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.784510 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.791966 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl82d\" (UniqueName: 
\"kubernetes.io/projected/b5b961b1-b622-458f-b946-ba3b2c403918-kube-api-access-vl82d\") pod \"ovnkube-control-plane-749d76644c-25dph\" (UID: \"b5b961b1-b622-458f-b946-ba3b2c403918\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.800968 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.802132 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.802246 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.802323 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.802442 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.803244 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.816101 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.828967 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.841785 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.872000 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc5142c27e4ba9ab65fafaf70b98206a6e9f1735e82c3fa6af79f3759aec751\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:31Z\\\",\\\"message\\\":\\\"11:53:30.989451 6335 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:30.990236 6335 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 11:53:30.990279 6335 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:30.990672 6335 handler.go:190] Sending *v1.NetworkPolicy event 
handler 4 for removal\\\\nI1128 11:53:30.990696 6335 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:30.990702 6335 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:30.990763 6335 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 11:53:30.990786 6335 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:30.990808 6335 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1128 11:53:30.990814 6335 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1128 11:53:30.990831 6335 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1128 11:53:30.990840 6335 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:30.990848 6335 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:53:30.990855 6335 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:53:30.990863 6335 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:30.991920 6335 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 
11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.887409 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.906860 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.907286 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.907328 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.907342 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.907361 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.907377 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:34Z","lastTransitionTime":"2025-11-28T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.922448 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.943285 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.957756 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.964556 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"ce
rt-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:34 crc kubenswrapper[5030]: W1128 11:53:34.983517 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b961b1_b622_458f_b946_ba3b2c403918.slice/crio-746560ff3fcda53d4bef6c201821f1e0c6dfdf6f9ed570dbe65977818c51a3bb WatchSource:0}: Error finding container 746560ff3fcda53d4bef6c201821f1e0c6dfdf6f9ed570dbe65977818c51a3bb: Status 404 returned error can't find the container with id 746560ff3fcda53d4bef6c201821f1e0c6dfdf6f9ed570dbe65977818c51a3bb Nov 28 11:53:34 crc kubenswrapper[5030]: I1128 11:53:34.987959 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:34Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.010359 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.010400 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.010414 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.010434 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.010449 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.022826 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.035619 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.053857 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.073559 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 
11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.090692 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d
10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287f
aaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.113789 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a
712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.114900 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.114954 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.114969 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.114992 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.115008 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.128400 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.153941 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522
b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.171536 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.188419 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.203087 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.218731 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.219412 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc 
kubenswrapper[5030]: I1128 11:53:35.219460 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.219494 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.219513 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.219533 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.237433 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.254498 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.266358 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.322761 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.322795 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.322803 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.322819 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.322829 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.426058 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.426100 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.426110 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.426121 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.426130 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.536876 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.536968 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.536996 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.537029 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.537067 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.641413 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.641514 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.641532 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.641558 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.641576 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.744898 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" event={"ID":"b5b961b1-b622-458f-b946-ba3b2c403918","Type":"ContainerStarted","Data":"746560ff3fcda53d4bef6c201821f1e0c6dfdf6f9ed570dbe65977818c51a3bb"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.745033 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.745125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.745152 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.745194 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.745219 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.763370 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-zg94c"] Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.764077 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:35 crc kubenswrapper[5030]: E1128 11:53:35.764177 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.788286 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fde
e88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.816192 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.830619 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.842657 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a
2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc
/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.848305 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.848360 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.848379 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.848402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.848421 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.862388 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.885876 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.886011 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlt6\" (UniqueName: \"kubernetes.io/projected/a047de37-e5fb-49f1-8b34-94c084894e18-kube-api-access-9zlt6\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.895412 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.918509 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.938865 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.950903 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.950933 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.950942 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.950958 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.950967 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:35Z","lastTransitionTime":"2025-11-28T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.957519 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.977579 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.987727 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlt6\" (UniqueName: 
\"kubernetes.io/projected/a047de37-e5fb-49f1-8b34-94c084894e18-kube-api-access-9zlt6\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.987871 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:35 crc kubenswrapper[5030]: E1128 11:53:35.988130 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:35 crc kubenswrapper[5030]: E1128 11:53:35.988288 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:36.488243296 +0000 UTC m=+34.429986069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:35 crc kubenswrapper[5030]: I1128 11:53:35.998421 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.015384 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.016978 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlt6\" (UniqueName: \"kubernetes.io/projected/a047de37-e5fb-49f1-8b34-94c084894e18-kube-api-access-9zlt6\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.036719 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.054856 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc 
kubenswrapper[5030]: I1128 11:53:36.054896 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.054908 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.054925 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.054936 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.058985 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.087236 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.088511 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.088882 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:53:52.088731025 +0000 UTC m=+50.030473748 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.101859 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.116423 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc 
kubenswrapper[5030]: I1128 11:53:36.135788 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.152877 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc 
kubenswrapper[5030]: I1128 11:53:36.158101 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.158142 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.158159 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.158184 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.158201 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.175214 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.190215 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.190279 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:36 crc 
kubenswrapper[5030]: I1128 11:53:36.190353 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.190398 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190614 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190640 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190660 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190782 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190842 5030 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190873 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190885 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:52.190822878 +0000 UTC m=+50.132565601 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190893 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190891 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.190964 5030 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:52.190944001 +0000 UTC m=+50.132686814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.191202 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:52.191168688 +0000 UTC m=+50.132911411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.191227 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:52.191214619 +0000 UTC m=+50.132957332 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.195103 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.213099 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.237578 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.250597 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.262026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.262086 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.262103 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.262125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.262144 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.270453 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d
4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.298840 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.321685 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.344151 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.369906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.370224 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.370358 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.370524 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.370664 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.376414 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.388720 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.392687 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.392785 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.392691 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.392863 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.393018 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.393209 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.405530 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.440204 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.473809 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.473859 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.473873 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.473893 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.473906 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.481089 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.493871 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.494011 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: E1128 11:53:36.494074 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:37.494055597 +0000 UTC m=+35.435798280 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.501353 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc 
kubenswrapper[5030]: I1128 11:53:36.516140 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc 
kubenswrapper[5030]: I1128 11:53:36.576389 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.576442 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.576452 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.576490 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.576503 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.679749 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.680311 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.680326 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.680347 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.680360 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.751106 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" event={"ID":"b5b961b1-b622-458f-b946-ba3b2c403918","Type":"ContainerStarted","Data":"e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.751185 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" event={"ID":"b5b961b1-b622-458f-b946-ba3b2c403918","Type":"ContainerStarted","Data":"e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.767821 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0
f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.783533 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.783582 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.783595 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.783613 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.783631 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.791244 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.805043 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.818873 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.875365 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.887747 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.889554 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.889593 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.889605 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.889624 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.889635 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.903805 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.916444 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.929126 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.942438 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.955559 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.970384 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.987076 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.992114 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.992158 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.992168 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.992187 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:36 crc kubenswrapper[5030]: I1128 11:53:36.992199 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:36Z","lastTransitionTime":"2025-11-28T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.002169 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:36Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.023671 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.036042 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.047791 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:37Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:37 crc 
kubenswrapper[5030]: I1128 11:53:37.095733 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.095786 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.095803 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.095828 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.095846 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.198990 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.199049 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.199060 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.199084 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.199099 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.302492 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.302568 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.302579 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.302619 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.302631 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.392505 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:37 crc kubenswrapper[5030]: E1128 11:53:37.392696 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.405486 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.405537 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.405550 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.405569 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.405581 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.505140 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:37 crc kubenswrapper[5030]: E1128 11:53:37.505319 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:37 crc kubenswrapper[5030]: E1128 11:53:37.505394 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:39.50536863 +0000 UTC m=+37.447111313 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.509818 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.509854 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.509866 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.509897 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.509911 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.613017 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.613060 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.613071 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.613089 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.613101 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.716061 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.716106 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.716118 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.716138 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.716149 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.820461 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.820555 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.820573 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.820600 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.820623 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.925512 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.925574 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.925591 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.925619 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:37 crc kubenswrapper[5030]: I1128 11:53:37.925638 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:37Z","lastTransitionTime":"2025-11-28T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.028729 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.028776 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.028789 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.028810 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.028826 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.131998 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.132105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.132117 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.132135 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.132148 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.235262 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.235318 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.235331 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.235353 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.235366 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.339614 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.339726 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.339748 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.339780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.339808 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.392712 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:38 crc kubenswrapper[5030]: E1128 11:53:38.392894 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.393432 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:38 crc kubenswrapper[5030]: E1128 11:53:38.393540 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.393692 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:38 crc kubenswrapper[5030]: E1128 11:53:38.393772 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.441897 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.441933 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.441941 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.441955 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.441964 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.544723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.544770 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.544779 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.544796 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.544805 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.647514 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.647576 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.647594 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.647618 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.647638 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.750927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.750996 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.751016 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.751046 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.751085 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.855003 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.855035 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.855049 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.855066 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.855077 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.957747 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.957830 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.957853 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.957890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:38 crc kubenswrapper[5030]: I1128 11:53:38.957916 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:38Z","lastTransitionTime":"2025-11-28T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.061865 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.061929 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.061949 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.061979 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.061997 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.164968 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.165029 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.165047 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.165070 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.165089 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.270089 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.270163 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.270185 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.270214 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.270233 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.392645 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:39 crc kubenswrapper[5030]: E1128 11:53:39.392923 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.396561 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.396608 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.396623 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.396642 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.396659 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.500716 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.500773 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.500795 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.500821 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.500842 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.531313 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:39 crc kubenswrapper[5030]: E1128 11:53:39.531447 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:39 crc kubenswrapper[5030]: E1128 11:53:39.531553 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:43.531535872 +0000 UTC m=+41.473278555 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.604315 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.604408 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.604432 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.604525 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.604553 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.707630 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.707702 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.707725 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.707756 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.707776 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.811329 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.811398 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.811410 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.811434 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.811448 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.914505 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.914553 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.914564 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.914581 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:39 crc kubenswrapper[5030]: I1128 11:53:39.914592 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:39Z","lastTransitionTime":"2025-11-28T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.017834 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.017914 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.017932 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.017958 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.017981 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.121698 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.121755 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.121780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.121811 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.121834 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.224539 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.224623 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.224646 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.224690 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.224717 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.328552 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.328599 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.328617 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.328640 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.328657 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.393065 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.393068 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:40 crc kubenswrapper[5030]: E1128 11:53:40.393277 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.393098 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:40 crc kubenswrapper[5030]: E1128 11:53:40.393562 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:40 crc kubenswrapper[5030]: E1128 11:53:40.393601 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.431194 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.431263 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.431282 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.431315 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.431334 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.535402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.535511 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.535537 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.535567 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.535592 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.638595 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.638674 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.638691 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.638714 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.638732 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.741812 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.741845 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.741855 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.741870 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.741893 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.856818 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.856860 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.856871 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.856918 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.856932 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.959991 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.960061 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.960080 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.960105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:40 crc kubenswrapper[5030]: I1128 11:53:40.960125 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:40Z","lastTransitionTime":"2025-11-28T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.063906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.063983 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.064013 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.064046 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.064070 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.167520 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.167776 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.167799 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.167822 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.167842 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.271620 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.271716 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.271739 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.271765 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.271782 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.375454 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.375548 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.375567 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.375593 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.375611 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.392403 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:41 crc kubenswrapper[5030]: E1128 11:53:41.392736 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.478304 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.478340 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.478349 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.478364 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.478373 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.582073 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.582166 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.582187 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.582215 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.582233 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.686752 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.686865 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.686890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.686918 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.686938 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.790945 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.791008 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.791026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.791053 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.791074 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.895050 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.895143 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.895170 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.895203 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.895228 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.998735 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.998808 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.998829 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.998856 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:41 crc kubenswrapper[5030]: I1128 11:53:41.998876 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:41Z","lastTransitionTime":"2025-11-28T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.103082 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.103152 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.103175 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.103319 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.103373 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.208010 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.208070 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.208092 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.208125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.208149 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.311150 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.311203 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.311217 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.311236 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.311249 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.392883 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.392980 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.392899 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:42 crc kubenswrapper[5030]: E1128 11:53:42.393115 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:42 crc kubenswrapper[5030]: E1128 11:53:42.393248 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:42 crc kubenswrapper[5030]: E1128 11:53:42.393446 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.413658 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.415198 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.415280 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.415303 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.415335 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.415357 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.436120 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.453784 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.474268 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.497731 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.520912 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc 
kubenswrapper[5030]: I1128 11:53:42.520974 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.520993 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.521017 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.521035 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.521912 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.541693 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.566885 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.601518 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.624563 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.624889 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.625038 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.624662 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.625185 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.625331 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.646682 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc 
kubenswrapper[5030]: I1128 11:53:42.669146 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.704772 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.727652 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.729304 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.729368 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.729394 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc 
kubenswrapper[5030]: I1128 11:53:42.729422 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.729441 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.751991 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.776584 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a
712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.795908 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-28T11:53:42Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.832602 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.832658 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.832676 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.832698 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.832712 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.936653 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.936717 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.936729 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.936753 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:42 crc kubenswrapper[5030]: I1128 11:53:42.936767 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:42Z","lastTransitionTime":"2025-11-28T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.040603 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.040674 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.040692 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.040721 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.040743 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.144088 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.144160 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.144183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.144215 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.144234 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.247717 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.247772 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.247785 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.247811 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.247831 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.350985 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.351420 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.351619 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.351779 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.351907 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.392725 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:43 crc kubenswrapper[5030]: E1128 11:53:43.393365 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.455611 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.455679 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.455700 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.456066 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.456094 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.559268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.559708 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.559906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.560056 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.560185 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.576224 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:43 crc kubenswrapper[5030]: E1128 11:53:43.577186 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:43 crc kubenswrapper[5030]: E1128 11:53:43.577297 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:53:51.57726852 +0000 UTC m=+49.519011233 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.664813 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.664869 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.664881 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.664904 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.664917 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.768753 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.768835 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.768860 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.768894 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.768913 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.872842 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.872905 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.872923 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.872947 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.872965 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.976801 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.976874 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.976888 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.976911 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:43 crc kubenswrapper[5030]: I1128 11:53:43.976928 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:43Z","lastTransitionTime":"2025-11-28T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.038315 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.038375 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.038392 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.038420 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.038440 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.057238 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:44Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.062335 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.062395 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.062419 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.062450 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.062510 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.085052 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:44Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.090039 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.090079 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.090091 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.090108 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.090117 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.110434 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:44Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.145481 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.145533 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.145549 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.145569 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.145581 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.166884 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:44Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.167066 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.169133 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.169180 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.169191 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.169211 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.169224 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.272669 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.272715 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.272726 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.272746 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.272758 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.375651 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.375696 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.375706 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.375724 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.375737 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.392661 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.392656 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.392822 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.393014 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.393419 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:44 crc kubenswrapper[5030]: E1128 11:53:44.393554 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.479883 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.480376 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.480587 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.480876 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.481191 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.585433 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.585575 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.585594 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.585622 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.585639 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.688740 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.689183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.690110 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.691025 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.691271 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.794668 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.795535 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.795854 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.796132 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.796350 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.903888 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.904380 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.904582 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.904755 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:44 crc kubenswrapper[5030]: I1128 11:53:44.904905 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:44Z","lastTransitionTime":"2025-11-28T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.008808 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.008877 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.008897 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.008925 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.008945 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.112299 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.113269 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.113428 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.113597 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.113745 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.217031 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.217507 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.217899 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.218037 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.218155 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.321836 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.322403 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.322784 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.322960 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.323123 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.392930 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:45 crc kubenswrapper[5030]: E1128 11:53:45.393563 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.426903 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.426954 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.426970 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.426995 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.427102 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.531140 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.531637 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.532146 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.532370 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.532596 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.636426 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.636875 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.637018 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.637170 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.637304 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.740851 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.740890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.740904 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.740921 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.740935 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.843917 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.843977 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.843994 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.844021 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.844041 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.947793 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.947854 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.947873 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.947900 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:45 crc kubenswrapper[5030]: I1128 11:53:45.947919 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:45Z","lastTransitionTime":"2025-11-28T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.051671 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.051728 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.051747 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.051778 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.051802 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.154813 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.154872 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.154891 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.154915 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.154954 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.259211 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.259836 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.259994 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.260163 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.260356 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.364148 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.364213 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.364230 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.364257 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.364275 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.392401 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.392513 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.392938 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:46 crc kubenswrapper[5030]: E1128 11:53:46.392913 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:46 crc kubenswrapper[5030]: E1128 11:53:46.393151 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:46 crc kubenswrapper[5030]: E1128 11:53:46.393300 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.394531 5030 scope.go:117] "RemoveContainer" containerID="4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.468055 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.468103 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.468115 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.468136 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.468151 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.573060 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.573588 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.573603 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.573624 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.573640 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.676713 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.676757 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.676769 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.676791 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.676804 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.779684 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.779728 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.779740 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.779757 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.779767 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.797081 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/1.log" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.801977 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.807698 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.845600 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://99
7f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.866896 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.883243 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.883316 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.883337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.883365 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.883382 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.899313 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.933572 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0
f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.951214 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42ca
c5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.967818 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.985009 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.986238 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.986299 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.986314 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.986335 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.986347 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:46Z","lastTransitionTime":"2025-11-28T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:46 crc kubenswrapper[5030]: I1128 11:53:46.998819 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:46Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.009093 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.024042 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T1
1:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.036254 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc 
kubenswrapper[5030]: I1128 11:53:47.055504 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f6
9c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 
11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.083782 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.089412 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.089502 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.089522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.089547 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.089565 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.104014 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.140596 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 
2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.159378 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.180791 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.193049 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.193110 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.193123 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.193146 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.193160 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.297041 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.297104 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.297122 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.297150 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.297171 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.392432 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:47 crc kubenswrapper[5030]: E1128 11:53:47.392666 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.400722 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.400767 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.400781 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.400806 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.400822 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.504437 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.504539 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.504558 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.504587 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.504606 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.608753 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.608835 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.608854 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.608886 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.608911 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.712601 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.712676 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.712702 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.712732 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.712751 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.809818 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/2.log" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.810977 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/1.log" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.815427 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.815518 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.815542 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.815573 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.815594 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.816753 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10" exitCode=1 Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.816850 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.816936 5030 scope.go:117] "RemoveContainer" containerID="4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.817982 5030 scope.go:117] "RemoveContainer" containerID="14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10" Nov 28 11:53:47 crc kubenswrapper[5030]: E1128 11:53:47.818283 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.848285 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.871957 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.897426 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.917643 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.919283 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.919353 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.919375 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.919409 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.919431 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:47Z","lastTransitionTime":"2025-11-28T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.946722 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.965154 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.979414 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:47 crc kubenswrapper[5030]: I1128 11:53:47.996171 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:47Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.012395 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.022293 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.022382 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.022407 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.022438 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.022461 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.030757 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.051954 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.081175 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.095811 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.125881 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 
11:53:48.125947 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.125965 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.125990 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.126007 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.129517 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 
handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] 
Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-syste
md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\
"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.142180 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.156729 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.177578 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.228853 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.228938 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.228965 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.228997 5030 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.229019 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.333163 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.333233 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.333253 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.333284 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.333304 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.372719 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.384580 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.392555 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.392571 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:48 crc kubenswrapper[5030]: E1128 11:53:48.392797 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.392599 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:48 crc kubenswrapper[5030]: E1128 11:53:48.393063 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:48 crc kubenswrapper[5030]: E1128 11:53:48.393237 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.397602 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.431841 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.435941 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.436021 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.436037 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.436057 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.436071 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.451816 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.471378 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.496717 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.515730 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.530492 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.540765 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.540825 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.540839 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.540866 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.540885 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.546844 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.563181 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.576190 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.592804 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.610655 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc 
kubenswrapper[5030]: I1128 11:53:48.627393 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f6
9c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 
11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.643202 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.643845 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.643935 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.643958 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.643996 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.644027 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.659433 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.682846 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca305c3bdb3de56232f29ad0f7a43b513415dc4b3a5cbc19b5099b2738da9ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:33Z\\\",\\\"message\\\":\\\"gressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:32.647139 6520 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 11:53:32.646781 6520 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.647281 6520 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1128 11:53:32.647309 6520 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 11:53:32.647327 6520 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 11:53:32.647343 6520 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1128 11:53:32.646229 6520 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.646817 6520 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:32.648133 6520 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 11:53:32.648520 6520 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:53:32.648591 6520 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:53:32.648656 6520 factory.go:656] Stopping watch factory\\\\nI1128 11:53:32.648710 6520 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:53:32.648778 6520 handler.go:208] Removed *v1.Node event handler 2\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 
reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9x
gmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.697445 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.747042 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.747105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.747123 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.747151 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.747170 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.823961 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/2.log" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.830275 5030 scope.go:117] "RemoveContainer" containerID="14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10" Nov 28 11:53:48 crc kubenswrapper[5030]: E1128 11:53:48.830594 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.850623 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.850690 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.850717 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.850747 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.850771 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.853927 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.872099 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.908280 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.929875 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.953112 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.953128 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.953183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.953204 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.953231 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.953250 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:48Z","lastTransitionTime":"2025-11-28T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.974815 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:48 crc kubenswrapper[5030]: I1128 11:53:48.992081 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:48Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.010634 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.029730 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.044415 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.055618 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.055651 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.055661 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.055684 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.055696 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:49Z","lastTransitionTime":"2025-11-28T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.059892 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.077119 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.091726 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc 
kubenswrapper[5030]: I1128 11:53:49.108870 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f6
9c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 
11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.126626 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.141320 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.159588 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 
11:53:49.159645 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.159656 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.159676 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.159690 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:49Z","lastTransitionTime":"2025-11-28T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.175432 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversion
s/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.190802 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:49Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.262922 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.262960 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.262969 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.262983 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.262995 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:49Z","lastTransitionTime":"2025-11-28T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.366553 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.366607 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.366621 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.366641 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.366656 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:49Z","lastTransitionTime":"2025-11-28T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.392891 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:49 crc kubenswrapper[5030]: E1128 11:53:49.393030 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.470057 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.470108 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.470126 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.470150 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.470169 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:49Z","lastTransitionTime":"2025-11-28T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.573268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.573323 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.573343 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.573367 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.573385 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:49Z","lastTransitionTime":"2025-11-28T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.676507 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.676607 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.676629 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.676685 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:49 crc kubenswrapper[5030]: I1128 11:53:49.676708 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:49Z","lastTransitionTime":"2025-11-28T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.300337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.300406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.300422 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.300450 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.300724 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:50Z","lastTransitionTime":"2025-11-28T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.392751 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.392807 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:50 crc kubenswrapper[5030]: E1128 11:53:50.392938 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.393010 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:50 crc kubenswrapper[5030]: E1128 11:53:50.393270 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:50 crc kubenswrapper[5030]: E1128 11:53:50.393430 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.403818 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.403877 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.403909 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.403929 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:50 crc kubenswrapper[5030]: I1128 11:53:50.403953 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:50Z","lastTransitionTime":"2025-11-28T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.334319 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.334384 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.334403 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.334431 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.334452 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:51Z","lastTransitionTime":"2025-11-28T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.391969 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:51 crc kubenswrapper[5030]: E1128 11:53:51.392198 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.437311 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.437402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.437422 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.437448 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.437499 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:51Z","lastTransitionTime":"2025-11-28T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.608158 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:51 crc kubenswrapper[5030]: E1128 11:53:51.608400 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:51 crc kubenswrapper[5030]: E1128 11:53:51.608584 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:54:07.608545987 +0000 UTC m=+65.550288710 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.644968 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.645048 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.645075 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.645105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:51 crc kubenswrapper[5030]: I1128 11:53:51.645128 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:51Z","lastTransitionTime":"2025-11-28T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.092451 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.092557 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.092580 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.092609 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.092627 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.114926 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.115169 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 11:54:24.115131369 +0000 UTC m=+82.056874082 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.196507 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.196595 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.196622 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.196649 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.196671 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.216667 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.216738 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.216837 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.216903 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.216926 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217078 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:54:24.217039696 +0000 UTC m=+82.158782579 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217126 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217145 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217204 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217232 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217260 5030 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:54:24.217221082 +0000 UTC m=+82.158963955 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217144 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217328 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:54:24.217295714 +0000 UTC m=+82.159038577 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217360 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217395 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.217509 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:54:24.217462398 +0000 UTC m=+82.159205111 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.300918 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.301006 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.301028 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.301057 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.301079 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.392685 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.392798 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.392710 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.393154 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.393389 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:52 crc kubenswrapper[5030]: E1128 11:53:52.393703 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.404016 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.404082 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.404100 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.404125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.404146 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.417713 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.436083 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.453872 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.476556 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.495217 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.506318 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.506343 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.506355 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.506372 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.506385 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.517994 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.532203 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.544420 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.567676 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.576704 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.587841 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc 
kubenswrapper[5030]: I1128 11:53:52.598449 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.608885 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.608929 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.608941 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.608959 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.608968 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.615867 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.631081 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.647129 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.665445 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.676954 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.695806 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8
c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:52Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.711837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.711884 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.711931 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.711957 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.711974 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.814558 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.814610 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.814626 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.814648 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.814665 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.917621 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.917682 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.917696 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.917716 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:52 crc kubenswrapper[5030]: I1128 11:53:52.917730 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:52Z","lastTransitionTime":"2025-11-28T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.020950 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.021013 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.021031 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.021052 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.021065 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.124376 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.124440 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.124463 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.124524 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.124542 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.228145 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.228665 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.228691 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.228769 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.228802 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.331531 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.331581 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.331597 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.331619 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.331636 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.392210 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:53 crc kubenswrapper[5030]: E1128 11:53:53.392431 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.434228 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.434291 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.434308 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.434332 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.434349 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.538050 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.538101 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.538120 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.538143 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.538160 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.641394 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.641454 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.641497 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.641523 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.641540 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.744084 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.744153 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.744176 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.744205 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.744225 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.847083 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.847132 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.847144 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.847160 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.847174 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.949636 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.949677 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.949685 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.949703 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:53 crc kubenswrapper[5030]: I1128 11:53:53.949712 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:53Z","lastTransitionTime":"2025-11-28T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.052991 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.053094 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.053131 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.053166 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.053193 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.156436 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.156519 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.156536 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.156559 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.156575 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.224542 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.224604 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.224650 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.224676 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.224695 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.249326 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:54Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.254948 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.255010 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.255031 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.255052 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.255065 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.276141 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:54Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.281205 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.281269 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.281286 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.281313 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.281331 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.304023 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:54Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.309348 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.309390 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.309400 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.309417 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.309428 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.329288 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:54Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.335143 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.335204 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.335226 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.335256 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.335280 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.354426 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:53:54Z is after 2025-08-24T17:21:41Z" Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.354630 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.356336 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.356573 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.356742 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.356975 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.357191 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.392577 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.392602 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.392845 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.392897 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.393032 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:54 crc kubenswrapper[5030]: E1128 11:53:54.393167 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.460830 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.461220 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.461379 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.461602 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.461830 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.564669 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.564740 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.564766 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.564801 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.564827 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.668149 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.668364 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.668385 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.668416 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.668441 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.772436 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.772642 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.772670 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.772696 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.772717 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.875733 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.875835 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.875855 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.875880 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.875901 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.979954 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.980025 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.980043 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.980072 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:54 crc kubenswrapper[5030]: I1128 11:53:54.980091 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:54Z","lastTransitionTime":"2025-11-28T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.083563 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.083924 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.084057 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.084296 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.084562 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.188372 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.188429 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.188447 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.188500 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.188521 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.291823 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.292230 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.292403 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.292597 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.292727 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.392528 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:55 crc kubenswrapper[5030]: E1128 11:53:55.393656 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.395790 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.395823 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.395834 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.395849 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.395860 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.501035 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.501108 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.501128 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.501159 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.501186 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.604776 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.605092 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.605209 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.605339 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.605509 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.708420 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.708511 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.708531 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.708558 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.708575 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.812331 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.812383 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.812401 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.812425 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.812443 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.924720 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.924800 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.924820 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.924847 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:55 crc kubenswrapper[5030]: I1128 11:53:55.924868 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:55Z","lastTransitionTime":"2025-11-28T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.028054 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.028142 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.028162 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.028190 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.028213 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.131693 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.131774 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.131793 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.131822 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.131840 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.236109 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.236186 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.236203 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.236230 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.236248 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.339927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.339989 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.340005 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.340031 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.340048 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.392404 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.392450 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.392685 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:56 crc kubenswrapper[5030]: E1128 11:53:56.392672 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:56 crc kubenswrapper[5030]: E1128 11:53:56.392808 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:56 crc kubenswrapper[5030]: E1128 11:53:56.393097 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.443446 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.443513 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.443525 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.443542 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.443557 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.547818 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.547900 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.547944 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.547984 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.548010 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.652773 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.652940 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.652966 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.652999 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.653018 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.757204 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.757285 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.757304 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.757332 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.757352 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.862630 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.862809 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.862837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.862871 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.862901 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.966856 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.966920 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.966933 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.966953 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:56 crc kubenswrapper[5030]: I1128 11:53:56.966966 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:56Z","lastTransitionTime":"2025-11-28T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.071660 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.071738 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.071751 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.071777 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.071799 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.175681 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.175747 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.175769 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.175800 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.175823 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.278694 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.278745 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.278758 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.278777 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.278793 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.382723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.382776 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.382792 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.382816 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.382831 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.392496 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:57 crc kubenswrapper[5030]: E1128 11:53:57.392724 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.486986 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.487053 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.487078 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.487107 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.487133 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.590345 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.590396 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.590406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.590424 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.590436 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.694715 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.694785 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.694803 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.694829 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.694847 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.798384 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.798453 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.798496 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.798523 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.798541 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.901347 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.901421 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.901440 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.901498 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:57 crc kubenswrapper[5030]: I1128 11:53:57.901527 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:57Z","lastTransitionTime":"2025-11-28T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.004961 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.005010 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.005021 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.005040 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.005052 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.108878 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.108959 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.108987 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.109152 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.109181 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.213575 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.213647 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.213732 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.213766 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.213789 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.317040 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.317094 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.317106 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.317125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.317141 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.392736 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.392779 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.392737 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:53:58 crc kubenswrapper[5030]: E1128 11:53:58.392903 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:53:58 crc kubenswrapper[5030]: E1128 11:53:58.393528 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:53:58 crc kubenswrapper[5030]: E1128 11:53:58.393774 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.420864 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.420937 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.421688 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.421720 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.421740 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.525133 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.525198 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.525216 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.525241 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.525260 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.629083 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.629147 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.629168 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.629194 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.629215 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.733376 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.733437 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.733454 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.733499 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.733545 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.836191 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.836249 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.836268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.836299 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.836320 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.939915 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.940007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.940030 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.940058 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:58 crc kubenswrapper[5030]: I1128 11:53:58.940077 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:58Z","lastTransitionTime":"2025-11-28T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.043189 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.043275 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.043296 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.043325 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.043344 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.146828 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.146883 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.146900 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.146925 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.146946 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.250331 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.250408 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.250425 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.250456 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.250517 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.354022 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.354098 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.354122 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.354151 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.354173 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.392368 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:53:59 crc kubenswrapper[5030]: E1128 11:53:59.392741 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.457221 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.457272 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.457286 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.457306 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.457322 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.560382 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.560442 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.560461 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.560526 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.560545 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.664184 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.664249 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.664268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.664294 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.664310 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.768288 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.768358 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.768377 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.768402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.768420 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.872511 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.872586 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.872610 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.872640 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.872661 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.976075 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.976147 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.976173 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.976204 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:53:59 crc kubenswrapper[5030]: I1128 11:53:59.976225 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:53:59Z","lastTransitionTime":"2025-11-28T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.079837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.079906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.079920 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.079948 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.079967 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.182807 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.182881 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.182902 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.182929 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.182948 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.286388 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.286451 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.286502 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.286528 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.286547 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.390629 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.390693 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.390710 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.390736 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.390754 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.392977 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.393082 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:00 crc kubenswrapper[5030]: E1128 11:54:00.393206 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.393248 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:00 crc kubenswrapper[5030]: E1128 11:54:00.393391 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:00 crc kubenswrapper[5030]: E1128 11:54:00.394051 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.394510 5030 scope.go:117] "RemoveContainer" containerID="14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10" Nov 28 11:54:00 crc kubenswrapper[5030]: E1128 11:54:00.394794 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.494230 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.494304 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.494328 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.494364 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.494394 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.598234 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.598308 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.598330 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.598360 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.598383 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.702051 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.702097 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.702111 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.702133 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.702147 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.805237 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.805320 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.805346 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.805377 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.805401 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.914810 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.914868 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.914893 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.914940 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:00 crc kubenswrapper[5030]: I1128 11:54:00.914966 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:00Z","lastTransitionTime":"2025-11-28T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.018691 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.018768 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.018789 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.018820 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.018841 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.122127 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.122197 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.122219 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.122246 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.122267 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.225254 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.225292 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.225300 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.225315 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.225324 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.329156 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.329234 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.329257 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.329290 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.329313 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.392634 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:01 crc kubenswrapper[5030]: E1128 11:54:01.392858 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.433013 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.433071 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.433084 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.433104 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.433116 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.536590 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.536668 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.536686 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.536712 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.536730 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.640188 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.640244 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.640260 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.640280 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.640294 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.744121 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.744198 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.744217 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.744245 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.744269 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.846834 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.846915 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.846941 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.846972 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.846999 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.950106 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.950173 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.950192 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.950222 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:01 crc kubenswrapper[5030]: I1128 11:54:01.950242 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:01Z","lastTransitionTime":"2025-11-28T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.053953 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.054012 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.054034 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.054059 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.054077 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.157167 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.157240 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.157258 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.157287 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.157307 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.260795 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.260866 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.260885 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.260909 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.260926 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.363647 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.363704 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.363716 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.363735 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.363748 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.392154 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.392250 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:02 crc kubenswrapper[5030]: E1128 11:54:02.392396 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.392414 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:02 crc kubenswrapper[5030]: E1128 11:54:02.392526 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:02 crc kubenswrapper[5030]: E1128 11:54:02.392607 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.416793 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.433746 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dn
s-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.453645 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.467996 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.468069 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.468093 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc 
kubenswrapper[5030]: I1128 11:54:02.468173 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.468200 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.471901 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.496751 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.515832 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.528780 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.542754 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.561188 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.572186 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.572270 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.572288 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.572347 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.572366 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.574766 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.585294 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc 
kubenswrapper[5030]: I1128 11:54:02.597001 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.613977 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.629393 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.642402 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.666679 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.678158 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.678209 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.678242 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.678270 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.678287 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.687193 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.718502 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:02Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.780944 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.780991 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.781033 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.781053 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.781065 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.883692 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.883750 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.883770 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.883798 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.883815 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.986363 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.986399 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.986408 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.986422 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:02 crc kubenswrapper[5030]: I1128 11:54:02.986430 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:02Z","lastTransitionTime":"2025-11-28T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.090619 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.090661 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.090673 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.090690 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.090701 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.194048 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.194087 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.194098 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.194113 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.194124 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.296999 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.297049 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.297062 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.297081 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.297097 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.392608 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:03 crc kubenswrapper[5030]: E1128 11:54:03.392898 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.400365 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.400424 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.400436 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.400457 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.400499 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.503992 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.504657 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.504705 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.504734 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.504751 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.612248 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.612279 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.612287 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.612302 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.612310 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.715496 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.715572 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.715591 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.715620 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.715644 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.819263 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.819337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.819354 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.819380 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.819399 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.922007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.922070 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.922087 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.922114 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:03 crc kubenswrapper[5030]: I1128 11:54:03.922133 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:03Z","lastTransitionTime":"2025-11-28T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.025139 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.025280 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.025298 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.025322 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.025343 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.128933 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.129003 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.129026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.129057 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.129076 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.232365 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.232431 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.232453 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.232516 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.232541 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.335948 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.336012 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.336026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.336045 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.336059 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.392556 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.392639 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.392565 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:04 crc kubenswrapper[5030]: E1128 11:54:04.392834 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:04 crc kubenswrapper[5030]: E1128 11:54:04.393174 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:04 crc kubenswrapper[5030]: E1128 11:54:04.393364 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.438655 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.438726 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.438749 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.438785 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.438808 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.542338 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.542391 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.542442 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.542501 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.542533 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.645533 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.645654 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.645706 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.645732 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.645750 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.689086 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.689159 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.689181 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.689211 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.689233 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: E1128 11:54:04.713063 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.718782 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.718837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.718855 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.718880 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.718898 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.801711 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.801763 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.801780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.801807 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.801830 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: E1128 11:54:04.824606 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:04Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:04 crc kubenswrapper[5030]: E1128 11:54:04.824995 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.828297 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.828372 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.828400 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.828434 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.828460 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.931244 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.931326 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.931348 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.931379 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:04 crc kubenswrapper[5030]: I1128 11:54:04.931400 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:04Z","lastTransitionTime":"2025-11-28T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.036580 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.036663 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.036689 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.036723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.036749 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.140927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.140994 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.141012 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.141039 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.141057 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.245239 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.245329 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.245349 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.245376 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.245397 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.349961 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.350013 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.350024 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.350041 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.350053 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.395245 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:05 crc kubenswrapper[5030]: E1128 11:54:05.396092 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.454509 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.454683 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.454696 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.454715 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.454728 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.558958 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.558994 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.559005 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.559021 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.559033 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.663530 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.663574 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.663585 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.663603 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.663614 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.767968 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.768007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.768029 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.768047 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.768059 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.870856 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.870907 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.870920 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.870938 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.870950 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.974729 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.974798 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.974823 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.974947 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:05 crc kubenswrapper[5030]: I1128 11:54:05.974998 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:05Z","lastTransitionTime":"2025-11-28T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.077511 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.077559 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.077569 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.077586 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.077596 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.180346 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.180421 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.180441 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.180507 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.180535 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.282968 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.283031 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.283045 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.283069 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.283083 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.387430 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.387510 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.387529 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.387552 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.387568 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.392787 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.392817 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.392834 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:06 crc kubenswrapper[5030]: E1128 11:54:06.392931 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:06 crc kubenswrapper[5030]: E1128 11:54:06.393055 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:06 crc kubenswrapper[5030]: E1128 11:54:06.393130 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.499119 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.499188 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.499200 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.499222 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.499235 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.601399 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.601457 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.601505 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.601530 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.601548 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.704100 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.704153 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.704347 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.704373 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.704386 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.807573 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.807663 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.807701 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.807732 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.807751 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.911154 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.911245 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.911275 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.911312 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:06 crc kubenswrapper[5030]: I1128 11:54:06.911337 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:06Z","lastTransitionTime":"2025-11-28T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.013405 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.013447 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.013456 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.013487 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.013497 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.116293 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.116333 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.116341 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.116355 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.116365 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.219216 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.219251 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.219259 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.219273 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.219284 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.321765 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.321812 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.321832 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.321872 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.321886 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.392651 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:07 crc kubenswrapper[5030]: E1128 11:54:07.392864 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.424437 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.424489 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.424498 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.424512 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.424522 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.527605 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.527677 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.527696 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.527724 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.527740 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.635927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.636411 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.636553 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.636643 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.636722 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.697824 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:07 crc kubenswrapper[5030]: E1128 11:54:07.698160 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:54:07 crc kubenswrapper[5030]: E1128 11:54:07.698327 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:54:39.698286864 +0000 UTC m=+97.640029587 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.739794 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.739862 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.739875 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.739901 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.739917 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.842976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.843048 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.843071 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.843104 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.843125 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.946229 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.946262 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.946272 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.946287 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:07 crc kubenswrapper[5030]: I1128 11:54:07.946297 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:07Z","lastTransitionTime":"2025-11-28T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.049389 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.049431 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.049439 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.049455 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.049483 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.151607 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.151650 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.151660 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.151677 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.151691 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.254275 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.254313 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.254322 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.254337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.254346 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.357444 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.357515 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.357525 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.357540 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.357553 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.392923 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:08 crc kubenswrapper[5030]: E1128 11:54:08.393053 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.393093 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.392919 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:08 crc kubenswrapper[5030]: E1128 11:54:08.393380 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:08 crc kubenswrapper[5030]: E1128 11:54:08.393655 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.406549 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.464404 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.464502 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.464527 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.464557 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.464580 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.567228 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.567317 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.567342 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.567379 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.567402 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.669492 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.669525 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.669534 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.669550 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.669562 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.772645 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.772713 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.772733 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.772757 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.772785 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.876529 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.876573 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.876584 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.876608 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.876626 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.908031 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/0.log" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.908095 5030 generic.go:334] "Generic (PLEG): container finished" podID="4ee84379-3754-48c5-aaab-15dbc36caa16" containerID="b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856" exitCode=1 Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.908148 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerDied","Data":"b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.908636 5030 scope.go:117] "RemoveContainer" containerID="b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.940521 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.956047 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.972627 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:54:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.981050 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.981105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.981124 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.981152 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.981170 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:08Z","lastTransitionTime":"2025-11-28T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:08 crc kubenswrapper[5030]: I1128 11:54:08.985067 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.000150 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:08Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.034991 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.046518 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.059890 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc 
kubenswrapper[5030]: I1128 11:54:09.077553 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.083724 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.083848 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.083907 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 
11:54:09.083945 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.083961 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.096169 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.116099 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.131941 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.153390 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.165139 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.179820 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42ca
c5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.186677 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.186722 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.186736 5030 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.186757 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.186772 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.210204 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be
103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.228925 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.244498 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.263608 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.289010 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.289062 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.289075 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.289098 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.289114 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.391957 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.391995 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.392006 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.392021 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.392023 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:09 crc kubenswrapper[5030]: E1128 11:54:09.392176 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.392032 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.495552 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.495607 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.495626 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.495649 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.495667 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.599306 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.599358 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.599369 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.599391 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.599404 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.702105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.702137 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.702145 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.702157 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.702166 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.804104 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.804163 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.804172 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.804188 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.804198 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.906538 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.906585 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.906601 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.906622 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.906659 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:09Z","lastTransitionTime":"2025-11-28T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.914274 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/0.log" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.914335 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerStarted","Data":"7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e"} Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.942289 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.958552 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42ca
c5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:09 crc kubenswrapper[5030]: I1128 11:54:09.987953 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:09Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.005371 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.009170 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.009202 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.009215 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc 
kubenswrapper[5030]: I1128 11:54:10.009233 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.009245 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.021077 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.033721 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.047064 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] 
Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.064281 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.080635 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.095913 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.112935 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.113051 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.113077 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.113102 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.113123 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.116412 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.157223 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.176561 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.189611 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc 
kubenswrapper[5030]: I1128 11:54:10.205651 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.215991 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.216038 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.216050 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 
11:54:10.216067 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.216081 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.225252 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.242676 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.255293 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.265357 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:10Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.318436 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.318490 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.318508 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.318526 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.318537 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.392057 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.392057 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:10 crc kubenswrapper[5030]: E1128 11:54:10.392205 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.392378 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:10 crc kubenswrapper[5030]: E1128 11:54:10.392537 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:10 crc kubenswrapper[5030]: E1128 11:54:10.392783 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.421568 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.421598 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.421607 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.421619 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.421629 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.525575 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.525623 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.525636 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.525659 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.525673 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.628349 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.628390 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.628402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.628419 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.628430 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.731932 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.732001 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.732020 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.732046 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.732063 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.835639 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.835708 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.835721 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.835742 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.835758 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.939945 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.940019 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.940035 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.940054 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:10 crc kubenswrapper[5030]: I1128 11:54:10.940068 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:10Z","lastTransitionTime":"2025-11-28T11:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.042499 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.042570 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.042580 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.042599 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.042612 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.145762 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.145846 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.145872 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.145903 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.145926 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.249512 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.249570 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.249616 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.249643 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.249661 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.352662 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.352719 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.352730 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.352754 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.352768 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.392858 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:11 crc kubenswrapper[5030]: E1128 11:54:11.392997 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.457534 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.457605 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.457623 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.457649 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.457665 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.561179 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.561233 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.561253 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.561302 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.561319 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.664953 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.665062 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.665082 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.665176 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.665254 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.769418 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.769497 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.769522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.769547 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.769564 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.872482 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.872521 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.872529 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.872545 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.872556 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.975065 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.975099 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.975108 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.975149 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:11 crc kubenswrapper[5030]: I1128 11:54:11.975163 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:11Z","lastTransitionTime":"2025-11-28T11:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.078649 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.078707 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.078723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.078749 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.078770 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.181863 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.181934 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.181951 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.181976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.181993 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.285668 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.285716 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.285729 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.285756 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.285776 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.388664 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.388720 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.388742 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.388770 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.388789 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.392671 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:12 crc kubenswrapper[5030]: E1128 11:54:12.392771 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.392772 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:12 crc kubenswrapper[5030]: E1128 11:54:12.393062 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.393130 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:12 crc kubenswrapper[5030]: E1128 11:54:12.393192 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.410684 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.432571 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] 
Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.459778 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.478191 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.491240 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.491321 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.491345 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.491380 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.491403 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.493769 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.511790 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.544271 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.557531 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.570918 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc 
kubenswrapper[5030]: I1128 11:54:12.583721 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.594625 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.594658 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.594667 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 
11:54:12.594681 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.594691 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.597839 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.623096 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.641185 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.657191 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.686509 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a
712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.698190 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.698448 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.698844 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.699043 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.699256 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.704358 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.730880 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.745301 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.765864 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:12Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.801657 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.802024 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.802087 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.802151 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.802220 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.906075 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.906158 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.906183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.906214 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:12 crc kubenswrapper[5030]: I1128 11:54:12.906238 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:12Z","lastTransitionTime":"2025-11-28T11:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.008565 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.009015 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.009222 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.009593 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.009777 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.112633 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.113026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.113174 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.113341 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.113587 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.217292 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.217341 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.217357 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.217379 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.217398 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.321830 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.322125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.322222 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.322330 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.322417 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.392989 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:13 crc kubenswrapper[5030]: E1128 11:54:13.393197 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.395099 5030 scope.go:117] "RemoveContainer" containerID="14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.425775 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.425810 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.425821 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.425836 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.425848 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.529146 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.529211 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.529234 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.529265 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.529287 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.631671 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.631728 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.631742 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.631760 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.631776 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.734797 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.734872 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.734891 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.734918 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.734940 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.855087 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.855128 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.855141 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.855157 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.855172 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.957173 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.957240 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.957261 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.957291 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:13 crc kubenswrapper[5030]: I1128 11:54:13.957316 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:13Z","lastTransitionTime":"2025-11-28T11:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.070035 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.070127 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.070149 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.070178 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.070196 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.175566 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.175979 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.175991 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.176031 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.176051 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.278781 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.278814 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.278822 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.278839 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.278849 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.381315 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.381390 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.381409 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.381438 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.381458 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.392799 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.392799 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.392975 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:14 crc kubenswrapper[5030]: E1128 11:54:14.393189 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:14 crc kubenswrapper[5030]: E1128 11:54:14.393338 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:14 crc kubenswrapper[5030]: E1128 11:54:14.393525 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.484796 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.484837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.484846 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.484866 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.484875 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.587886 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.587946 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.587961 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.587985 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.588002 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.690777 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.690823 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.690838 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.690858 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.690874 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.793688 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.793762 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.793775 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.793796 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.793810 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.896430 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.896493 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.896504 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.896530 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.896543 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.935050 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/2.log" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.940685 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116"} Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.941643 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.972519 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.998912 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.998959 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.998969 5030 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.998989 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:14 crc kubenswrapper[5030]: I1128 11:54:14.999006 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:14Z","lastTransitionTime":"2025-11-28T11:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.000280 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-net
ns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"}
,{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:14Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.016766 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.029750 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.041692 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.056773 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.068638 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.082866 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.102175 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.102220 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.102234 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.102255 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.102269 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.103994 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.104042 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.104054 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.104072 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.104084 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.104301 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740
151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.117136 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.120333 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742f
d0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.120973 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.121017 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.121026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.121043 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.121054 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.135842 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.138897 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.141230 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.141256 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.141268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.141286 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.141301 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.158448 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.163302 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.163367 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.163385 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.163407 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.163425 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.171951 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.179518 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.188316 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.188388 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.188408 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.188439 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.188459 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.191360 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.207958 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.208151 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.209313 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.211424 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.211550 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.211566 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.211587 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.211602 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.227210 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.243043 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] 
Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.259533 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.274559 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.288583 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.314908 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.315307 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.315409 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.315658 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.315819 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.391988 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.392565 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.419515 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.419688 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.419816 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.419900 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.419981 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.524264 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.524326 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.524345 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.524375 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.524396 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.628278 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.628334 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.628355 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.628385 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.628408 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.731187 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.732078 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.732275 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.732448 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.732744 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.836131 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.836668 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.836748 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.836871 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.836968 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.940083 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.940150 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.940166 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.940193 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.940212 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:15Z","lastTransitionTime":"2025-11-28T11:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.946111 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/3.log" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.947583 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/2.log" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.951615 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116" exitCode=1 Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.951701 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116"} Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.951784 5030 scope.go:117] "RemoveContainer" containerID="14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.953972 5030 scope.go:117] "RemoveContainer" containerID="7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116" Nov 28 11:54:15 crc kubenswrapper[5030]: E1128 11:54:15.954326 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:54:15 crc kubenswrapper[5030]: I1128 11:54:15.982037 5030 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.000790 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:15Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.019323 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42ca
c5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.043873 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.043957 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.043978 5030 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.044005 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.044027 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.048972 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be
103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.069241 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.087546 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.104203 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.129118 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.146971 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.147008 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.147019 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.147036 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.147047 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.152301 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.169990 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.186519 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.204838 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.238835 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14b837944454da3f3631ffc33b9f1306deb10c28597e16114c2324362caafc10\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:53:47Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.401965 6719 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.401993 6719 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403130 6719 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403424 6719 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.403845 6719 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 11:53:47.404102 6719 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 11:53:47.405032 6719 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 11:53:47.405169 6719 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:54:15.002198 7066 obj_retry.go:551] Creating *factory.egressNode crc took: 4.132329ms\\\\nI1128 11:54:15.002245 7066 factory.go:1336] Added *v1.Node event 
handler 7\\\\nI1128 11:54:15.002266 7066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:54:15.002275 7066 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:54:15.002319 7066 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1128 11:54:15.002360 7066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:54:15.002429 7066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:54:15.002493 7066 factory.go:656] Stopping watch factory\\\\nI1128 11:54:15.002679 7066 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 11:54:15.002816 7066 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 11:54:15.002866 7066 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:54:15.002952 7066 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 11:54:15.003044 7066 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.249568 5030 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.249781 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.249953 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.250127 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.250276 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.257123 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.275583 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.292306 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.314548 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.337586 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.351901 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.353856 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 
11:54:16.353912 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.353927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.353950 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.353965 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.392885 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:16 crc kubenswrapper[5030]: E1128 11:54:16.393062 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.393317 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:16 crc kubenswrapper[5030]: E1128 11:54:16.393388 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.393653 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:16 crc kubenswrapper[5030]: E1128 11:54:16.393724 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.457396 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.457510 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.457544 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.457576 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.457599 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.561157 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.561219 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.561231 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.561256 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.561273 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.664841 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.664899 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.664917 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.664942 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.664960 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.769043 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.769094 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.769105 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.769125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.769138 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.872872 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.872952 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.872979 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.873009 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.873030 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.958606 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/3.log" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.964982 5030 scope.go:117] "RemoveContainer" containerID="7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116" Nov 28 11:54:16 crc kubenswrapper[5030]: E1128 11:54:16.965257 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.975609 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.975666 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.975684 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.975708 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.975728 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:16Z","lastTransitionTime":"2025-11-28T11:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:16 crc kubenswrapper[5030]: I1128 11:54:16.988549 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:16Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.009853 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.033791 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42ca
c5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.079270 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.079337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.079360 5030 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.079392 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.079412 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.095768 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be
103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.117141 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.150578 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.177453 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.183265 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.183313 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.183338 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.183367 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.183389 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.202153 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.225705 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.246629 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.264408 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.286776 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.286829 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.286838 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.286874 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.286888 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.287202 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.327000 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:54:15.002198 7066 obj_retry.go:551] Creating *factory.egressNode crc took: 4.132329ms\\\\nI1128 11:54:15.002245 7066 factory.go:1336] Added *v1.Node event handler 7\\\\nI1128 11:54:15.002266 7066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:54:15.002275 7066 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:54:15.002319 7066 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1128 11:54:15.002360 7066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:54:15.002429 7066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:54:15.002493 7066 factory.go:656] Stopping watch factory\\\\nI1128 11:54:15.002679 7066 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 11:54:15.002816 7066 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 11:54:15.002866 7066 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:54:15.002952 7066 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 11:54:15.003044 7066 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:54:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.342390 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.357716 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc 
kubenswrapper[5030]: I1128 11:54:17.370563 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.385732 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.391000 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.391042 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.391054 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.391072 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.391088 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.392172 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:17 crc kubenswrapper[5030]: E1128 11:54:17.392412 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.403840 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 
crc kubenswrapper[5030]: I1128 11:54:17.419964 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:17Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.494131 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.494193 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.494210 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.494236 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.494253 5030 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.596867 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.596975 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.596993 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.597014 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.597031 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.700494 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.700574 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.700596 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.700626 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.700649 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.804156 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.804216 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.804240 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.804269 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.804291 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.907657 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.907723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.907741 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.907765 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:17 crc kubenswrapper[5030]: I1128 11:54:17.907795 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:17Z","lastTransitionTime":"2025-11-28T11:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.011229 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.011315 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.011341 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.011373 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.011400 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.114316 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.114390 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.114409 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.114434 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.114451 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.217703 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.217750 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.217768 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.217794 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.217811 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.321007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.321095 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.321112 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.321556 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.321611 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.392636 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.392688 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:18 crc kubenswrapper[5030]: E1128 11:54:18.392829 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.392841 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:18 crc kubenswrapper[5030]: E1128 11:54:18.393324 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:18 crc kubenswrapper[5030]: E1128 11:54:18.393223 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.424504 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.424548 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.424568 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.424590 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.424607 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.527914 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.527999 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.528024 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.528053 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.528074 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.631584 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.631653 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.631674 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.631700 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.631718 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.734952 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.735138 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.735161 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.735186 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.735202 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.837963 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.838109 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.838130 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.838155 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.838173 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.941899 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.941976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.941998 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.942024 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:18 crc kubenswrapper[5030]: I1128 11:54:18.942044 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:18Z","lastTransitionTime":"2025-11-28T11:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.045499 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.045572 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.045592 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.045618 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.045638 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.148500 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.148563 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.148581 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.148604 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.148622 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.252386 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.252495 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.252523 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.252556 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.252573 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.356130 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.356201 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.356232 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.356267 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.356290 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.392704 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:19 crc kubenswrapper[5030]: E1128 11:54:19.392900 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.459059 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.459127 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.459137 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.459170 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.459187 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.562842 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.562924 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.562942 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.562971 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.562996 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.667129 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.667207 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.667226 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.667261 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.667289 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.770894 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.770936 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.770947 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.770965 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.770978 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.874302 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.874380 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.874399 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.874429 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.874448 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.977795 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.977885 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.977911 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.977943 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:19 crc kubenswrapper[5030]: I1128 11:54:19.977970 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:19Z","lastTransitionTime":"2025-11-28T11:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.082162 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.082254 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.082277 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.082308 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.082329 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.186370 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.186443 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.186460 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.186518 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.186542 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.290203 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.290286 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.290321 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.290355 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.290378 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.391998 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.391998 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:20 crc kubenswrapper[5030]: E1128 11:54:20.392214 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.392459 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:20 crc kubenswrapper[5030]: E1128 11:54:20.392708 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:20 crc kubenswrapper[5030]: E1128 11:54:20.392957 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.394167 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.394211 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.394224 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.394243 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.394255 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.497683 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.497763 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.497781 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.497809 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.497827 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.603448 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.603571 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.603590 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.603622 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.603652 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.706874 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.706965 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.706991 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.707041 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.707065 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.810293 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.810360 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.810378 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.810406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.810427 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.914410 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.914527 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.914554 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.914601 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:20 crc kubenswrapper[5030]: I1128 11:54:20.914634 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:20Z","lastTransitionTime":"2025-11-28T11:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.017753 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.017831 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.017855 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.017889 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.017969 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.121504 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.121586 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.121611 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.121640 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.121657 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.224119 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.224179 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.224196 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.224222 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.224241 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.327784 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.327852 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.327880 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.327917 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.327940 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.392819 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:21 crc kubenswrapper[5030]: E1128 11:54:21.393033 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.431530 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.431597 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.431615 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.431638 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.431657 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.534225 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.534279 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.534295 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.534319 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.534336 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.637845 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.637906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.637919 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.637942 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.637963 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.741596 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.741649 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.741659 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.741680 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.741692 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.844964 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.845021 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.845037 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.845060 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.845079 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.948005 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.948071 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.948095 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.948128 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:21 crc kubenswrapper[5030]: I1128 11:54:21.948153 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:21Z","lastTransitionTime":"2025-11-28T11:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.051580 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.051617 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.051628 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.051641 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.051651 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.154261 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.154324 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.154342 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.154368 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.154388 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.257220 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.257296 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.257319 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.257350 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.257373 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.368337 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.368401 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.368418 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.368442 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.368454 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.392580 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.392732 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:22 crc kubenswrapper[5030]: E1128 11:54:22.392945 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.393047 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:22 crc kubenswrapper[5030]: E1128 11:54:22.393196 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:22 crc kubenswrapper[5030]: E1128 11:54:22.393723 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.417438 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.436712 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.455575 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42ca
c5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.470940 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.471002 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.471027 5030 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.471058 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.471079 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.482867 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be
103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.499813 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.519393 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.537323 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.551867 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.568460 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.573536 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.573609 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.573631 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.573663 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.573687 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.581602 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.598011 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.615657 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af55560
69eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.645903 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:54:15.002198 7066 obj_retry.go:551] Creating *factory.egressNode crc took: 4.132329ms\\\\nI1128 11:54:15.002245 7066 factory.go:1336] Added *v1.Node event handler 7\\\\nI1128 11:54:15.002266 7066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:54:15.002275 7066 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:54:15.002319 7066 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1128 11:54:15.002360 7066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:54:15.002429 7066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:54:15.002493 7066 factory.go:656] Stopping watch factory\\\\nI1128 11:54:15.002679 7066 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 11:54:15.002816 7066 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 11:54:15.002866 7066 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:54:15.002952 7066 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 11:54:15.003044 7066 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:54:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.665090 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.676559 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.676637 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.676655 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.676685 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.676702 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.682450 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc 
kubenswrapper[5030]: I1128 11:54:22.700010 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.723534 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.746052 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.765209 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:22Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.780073 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 
11:54:22.780132 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.780144 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.780166 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.780179 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.883176 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.883233 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.883246 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.883270 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.883286 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.986717 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.986765 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.986780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.986803 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:22 crc kubenswrapper[5030]: I1128 11:54:22.986820 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:22Z","lastTransitionTime":"2025-11-28T11:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.090149 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.090210 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.090223 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.090246 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.090261 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.194917 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.195007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.195026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.195057 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.195083 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.299393 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.299510 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.299530 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.299560 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.299579 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.392836 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:23 crc kubenswrapper[5030]: E1128 11:54:23.393065 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.402072 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.402126 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.402150 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.402186 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.402211 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.505844 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.505908 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.505925 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.505948 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.505966 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.610349 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.610402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.610419 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.610444 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.610494 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.713644 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.713707 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.713721 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.713752 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.713781 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.816526 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.816794 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.816821 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.816851 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.816870 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.919899 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.919957 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.919974 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.919999 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:23 crc kubenswrapper[5030]: I1128 11:54:23.920018 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:23Z","lastTransitionTime":"2025-11-28T11:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.022972 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.023058 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.023078 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.023102 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.023126 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.126763 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.126837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.126858 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.126890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.126912 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.211382 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.211619 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 11:55:28.211576699 +0000 UTC m=+146.153319422 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.230493 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.230554 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.230575 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.230601 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.230619 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.312931 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.313014 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.313051 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.313134 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313197 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313233 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313255 5030 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313325 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 11:55:28.313301464 +0000 UTC m=+146.255044177 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313197 5030 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313340 5030 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313430 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:55:28.313400456 +0000 UTC m=+146.255143179 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313358 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313514 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 11:55:28.313450898 +0000 UTC m=+146.255193751 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313519 5030 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313555 5030 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.313632 
5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 11:55:28.313605042 +0000 UTC m=+146.255347755 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.335977 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.336030 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.336048 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.336071 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.336087 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.391898 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.392231 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.392209 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.392336 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.392743 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:24 crc kubenswrapper[5030]: E1128 11:54:24.392940 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.439019 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.439083 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.439103 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.439125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.439145 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.542268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.542336 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.542360 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.542394 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.542417 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.645982 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.646077 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.646100 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.646129 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.646150 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.749618 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.749673 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.749691 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.749715 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.749733 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.853017 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.853074 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.853095 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.853120 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.853140 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.956387 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.956451 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.956494 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.956522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:24 crc kubenswrapper[5030]: I1128 11:54:24.956540 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:24Z","lastTransitionTime":"2025-11-28T11:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.059520 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.059605 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.059627 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.059658 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.059681 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.163371 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.163499 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.163528 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.163627 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.163656 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.267150 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.267299 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.267318 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.267348 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.267806 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.371090 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.371158 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.371180 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.371210 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.371231 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.392797 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:25 crc kubenswrapper[5030]: E1128 11:54:25.393077 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.473905 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.473958 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.473976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.473996 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.474015 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.540192 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.540267 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.540290 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.540321 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.540342 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: E1128 11:54:25.562801 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.572205 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.572309 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.572338 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.572374 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.572412 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: E1128 11:54:25.596091 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.602189 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.602245 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.602264 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.602288 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.602305 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: E1128 11:54:25.624643 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.630343 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.630400 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.630418 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.630440 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.630458 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: E1128 11:54:25.651170 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.655978 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.656018 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.656030 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.656049 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.656061 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: E1128 11:54:25.671121 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:25Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:25 crc kubenswrapper[5030]: E1128 11:54:25.671269 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.673406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.673438 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.673449 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.673494 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.673508 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.777054 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.777142 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.777186 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.777222 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.777242 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.880780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.880847 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.880868 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.880898 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.880920 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.984141 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.984202 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.984220 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.984249 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:25 crc kubenswrapper[5030]: I1128 11:54:25.984273 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:25Z","lastTransitionTime":"2025-11-28T11:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.087862 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.087943 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.087968 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.088002 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.088027 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.191601 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.191744 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.191773 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.191807 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.191832 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.294975 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.295063 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.295082 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.295111 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.295130 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.392704 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.392705 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.392886 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:26 crc kubenswrapper[5030]: E1128 11:54:26.393086 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:26 crc kubenswrapper[5030]: E1128 11:54:26.393228 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:26 crc kubenswrapper[5030]: E1128 11:54:26.393378 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.403947 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.404017 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.404040 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.404069 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.404092 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.507794 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.507849 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.507867 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.507890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.507906 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.611655 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.611710 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.611720 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.611739 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.611751 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.715234 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.715335 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.715358 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.715391 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.715416 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.819970 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.820047 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.820276 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.820308 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.820325 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.925060 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.925616 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.925638 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.925666 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:26 crc kubenswrapper[5030]: I1128 11:54:26.925685 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:26Z","lastTransitionTime":"2025-11-28T11:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.028838 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.028913 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.028935 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.028962 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.028981 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.132352 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.132446 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.132508 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.132560 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.132585 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.236860 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.236940 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.236958 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.236987 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.237007 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.341973 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.342087 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.342108 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.342139 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.342165 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.392727 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:27 crc kubenswrapper[5030]: E1128 11:54:27.392996 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.447171 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.447237 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.447261 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.447290 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.447311 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.550731 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.550808 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.550832 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.550863 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.550886 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.654412 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.654531 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.654555 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.654589 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.654611 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.758039 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.758125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.758146 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.758182 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.758204 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.861585 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.861697 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.861730 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.861771 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.861799 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.965351 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.965417 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.965440 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.965562 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:27 crc kubenswrapper[5030]: I1128 11:54:27.965594 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:27Z","lastTransitionTime":"2025-11-28T11:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.068978 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.069038 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.069056 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.069079 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.069097 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.172694 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.172778 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.172802 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.172835 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.172860 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.276713 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.276769 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.276780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.276799 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.276811 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.379704 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.379864 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.379887 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.380256 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.380363 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.392580 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.392598 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.392709 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:28 crc kubenswrapper[5030]: E1128 11:54:28.392883 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:28 crc kubenswrapper[5030]: E1128 11:54:28.393149 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:28 crc kubenswrapper[5030]: E1128 11:54:28.393092 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.483843 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.483921 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.483940 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.484151 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.484176 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.587794 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.587837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.587848 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.587865 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.587877 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.691835 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.691906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.691926 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.691952 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.691970 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.795304 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.795399 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.795423 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.795456 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.795561 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.899608 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.899684 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.899707 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.899736 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:28 crc kubenswrapper[5030]: I1128 11:54:28.899759 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:28Z","lastTransitionTime":"2025-11-28T11:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.004316 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.004388 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.004409 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.004436 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.004455 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.108664 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.108727 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.108744 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.108773 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.108792 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.212372 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.212434 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.212455 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.212513 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.212533 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.316050 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.316132 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.316152 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.316183 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.316202 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.392442 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:29 crc kubenswrapper[5030]: E1128 11:54:29.392777 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.418877 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.418935 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.418953 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.418977 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.418995 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.521906 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.521982 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.522006 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.522038 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.522061 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.625150 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.625227 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.625250 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.625278 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.625298 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.728557 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.728629 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.728688 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.728716 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.728738 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.831441 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.831520 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.831537 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.831562 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.831579 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.935312 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.935770 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.935987 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.936140 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:29 crc kubenswrapper[5030]: I1128 11:54:29.936271 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:29Z","lastTransitionTime":"2025-11-28T11:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.039281 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.039344 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.039362 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.039389 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.039407 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.142644 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.142714 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.142728 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.142783 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.142798 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.246070 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.246152 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.246170 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.246199 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.246220 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.350400 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.350546 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.350581 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.350612 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.350637 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.392414 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.392546 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.392567 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:30 crc kubenswrapper[5030]: E1128 11:54:30.392718 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:30 crc kubenswrapper[5030]: E1128 11:54:30.393611 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:30 crc kubenswrapper[5030]: E1128 11:54:30.393754 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.394338 5030 scope.go:117] "RemoveContainer" containerID="7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116" Nov 28 11:54:30 crc kubenswrapper[5030]: E1128 11:54:30.394736 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.454743 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.454802 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.454824 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.454853 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.454876 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.558877 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.558956 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.558976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.559006 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.559028 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.663306 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.663389 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.663406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.663431 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.663450 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.767184 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.767304 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.767325 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.767356 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.767376 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.871839 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.871950 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.871976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.872016 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.872268 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.976406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.976501 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.976522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.976550 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:30 crc kubenswrapper[5030]: I1128 11:54:30.976568 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:30Z","lastTransitionTime":"2025-11-28T11:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.079949 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.080048 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.080068 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.080098 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.080120 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.183724 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.183798 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.183823 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.183852 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.183869 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.287963 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.288033 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.288052 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.288082 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.288103 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.392000 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.392000 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.392399 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.392425 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: E1128 11:54:31.392435 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.392454 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.392546 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.496204 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.496254 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.496271 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.496294 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.496311 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.599870 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.599941 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.599972 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.599998 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.600018 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.703400 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.703501 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.703521 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.703549 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.703584 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.807569 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.807711 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.807739 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.807833 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.807864 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.912226 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.912774 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.913005 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.913220 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:31 crc kubenswrapper[5030]: I1128 11:54:31.913435 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:31Z","lastTransitionTime":"2025-11-28T11:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.017777 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.018087 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.018180 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.018267 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.018346 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.122029 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.122102 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.122120 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.122149 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.122171 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.225630 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.225684 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.225696 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.225717 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.225730 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.328546 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.328624 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.328643 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.328670 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.328689 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.392537 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.392628 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.392551 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:32 crc kubenswrapper[5030]: E1128 11:54:32.392857 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:32 crc kubenswrapper[5030]: E1128 11:54:32.393212 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:32 crc kubenswrapper[5030]: E1128 11:54:32.393829 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.417307 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d8b592e-41f8-40de-b51e-6fd3cd82ddec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://295dacd494441d9923ad635928d070f0ee52f24c8540bc63de3aae494c0b7f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e50b268371e499531f345ff272f543fdd06768c0c8d8bc769b932a708ab4c42c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ae4d67b238269df28cc7dfe5b9e7e4e09132d3533b538fff04765321263a3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.434046 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.434142 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.434167 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.434204 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.434230 5030 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.436684 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33ee8a59-861f-45a9-899b-a14b271beeec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4941837db92a86711049d8127c0c54d85666d4657fd632275b753d6cf824402a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e0ee0c11239d02d532be8f53740151a5473ce01cfeff9bfd74d14fd2f23e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://115d1d02ee85fac531c03ead7408d14eee3d97a5ded22b9c667d533ab91d5a61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6996d7d40c46362392eb3f60da532f29d3cacef6388a18783a7df96ff7782d20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.467405 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0e41903-23e8-4fb4-9ccc-2bf6c56e255c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f52f0d8e6cbcb78271fd893263e39ec6a94f3be4ee43d3070153c7fc4c28c93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2949a5a8a3756365131d94c6358f30f2234d7733fd3ea4047fdc88e02afe289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffe18ff5f24b53495e1e225fbe41599d9d93ea0e80f28b390545d558112be384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://728d02947e3c1b05d94171522b08afe44fedaaf431cc6c5d7fbc99dd38c8f196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c94a3fa7bf5af31900f892d9feff8d4397bfbc5d4e07d13f1328b9e34c13a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa2ae19c28954a8f83010562eb7107befbbc2a3a48c82ed2f70cc6ae997be8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://badc7f7cfe21b4dd9b3c1ae4a3cadbb1ca63556044a611af072e6cc8044827ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://997f7b97dcd48895452e0b98a79e2d07df5b31f605be103e8a7147d78f12e5b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.484432 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.504117 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8cab682855cf618af11acc399dd3b98a6b5c38c518f8d3078bddf6b2525d4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.527180 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e46bfdf-4891-4bd6-8c51-3453013f5285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6a2e2cb032e9c74047c59f688caafaa78cf1b2f65779bc1d40b0f644e277e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77828a01b5bb35ba3f44d9c74b3a2adfd27b1ab0edae14377cf47296217d24cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0684b9e6c94e17a418ddfb11b140daebdbb3803d1bb1f8e7bdeb4076d3dfb8a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c6390f5af74947bb68e5b4e7416095f6d122cb6617bd1a9c919a8bdcf402c4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a353a712ac88ff39c9322222027ac3d14b9f94b712de53d9ff9930ccca9b5c8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35eade2f854750e40118c9d42faefe0f8b251d8cc5d14d078bc5b112ed70812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09b670f285f6f4528ab28ddc0ce869196cff43362e79467c18d136a66fed4a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rsx2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cx2sr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.537915 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.537986 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.538006 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.538035 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.538052 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.547040 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b961b1-b622-458f-b946-ba3b2c403918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e157b8267fdc717cd296285288fb417fc468eab880eb1c4ed7a825434b5fc40d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4949e1c306f6dcea662ddb9fa5a17acb42cac5744c7c60c87eee9457a6793c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vl82d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:34Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-25dph\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.570207 5030 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.586761 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00cccc0917af7b3e63961be564517954bfe61a82850624b3fb87b9d8ad98581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.598174 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7w8nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb9b76b5-26c0-4a17-a384-356a8b82fed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://964dabd57e4029ec5db261c31f297167b3772e93cc85f20772bd49be71d8e145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-krcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7w8nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.613015 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8e6d4c7-9635-4925-bf75-96379201ef67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://251dbfde402fa8f1904dd213bfa5089
190781aef79d42b7873739e8e5e840ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm28r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-11-28T11:53:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cqr62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.631052 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kfz78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee84379-3754-48c5-aaab-15dbc36caa16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:08Z\\\",\\\"message\\\":\\\"2025-11-28T11:53:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7\\\\n2025-11-28T11:53:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eea0ce15-f0ed-4f25-8e82-7eb04deee8c7 to /host/opt/cni/bin/\\\\n2025-11-28T11:53:22Z [verbose] multus-daemon started\\\\n2025-11-28T11:53:22Z [verbose] Readiness Indicator file check\\\\n2025-11-28T11:54:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\
"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs9fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kfz78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.641707 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.641781 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.641800 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.641828 5030 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.641851 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.646685 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zg94c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a047de37-e5fb-49f1-8b34-94c084894e18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zlt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zg94c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc 
kubenswrapper[5030]: I1128 11:54:32.663851 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3519649d-fbcc-44c1-844a-a583187adfe4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15347ebd6790bbea101cf7c1648c4dca835235e58135b355c07606ec6c449ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a29e78db6cc2e04a56ec70a310fda7bce1ca32eb00ff65221b3eef96fac81afc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.677815 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a36cb8a-5a38-4da0-938c-fafe93f48886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T11:53:20Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 11:53:15.036647 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 11:53:15.037944 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1087227834/tls.crt::/tmp/serving-cert-1087227834/tls.key\\\\\\\"\\\\nI1128 11:53:20.369143 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 11:53:20.373110 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 11:53:20.373145 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 11:53:20.373180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 11:53:20.373191 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 11:53:20.386086 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 11:53:20.386127 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 11:53:20.386141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 11:53:20.386146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 11:53:20.386151 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 11:53:20.386156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 11:53:20.386409 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1128 11:53:20.388288 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f1b63b42859704ae5d4574e217e70292
a57122bd50e993e0210d7e34455a681\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.697043 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae6914d51fd67085379950283de2c779b8a7128055af37f8d70643254659d178\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b347574a8c52b41dcb3f881e0a2daeec12992e5ab7cfd4f5d0834d8e600545f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.725492 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.746167 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 
11:54:32.746226 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.746247 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.746275 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.746297 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.761772 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T11:54:15Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes 
Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 11:54:15.002198 7066 obj_retry.go:551] Creating *factory.egressNode crc took: 4.132329ms\\\\nI1128 11:54:15.002245 7066 factory.go:1336] Added *v1.Node event handler 7\\\\nI1128 11:54:15.002266 7066 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 11:54:15.002275 7066 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 11:54:15.002319 7066 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1128 11:54:15.002360 7066 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 11:54:15.002429 7066 handler.go:208] Removed *v1.Node event handler 7\\\\nI1128 11:54:15.002493 7066 factory.go:656] Stopping watch factory\\\\nI1128 11:54:15.002679 7066 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 11:54:15.002816 7066 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 11:54:15.002866 7066 ovnkube.go:599] Stopped ovnkube\\\\nI1128 11:54:15.002952 7066 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 11:54:15.003044 7066 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T11:54:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d40b1e6034e31a5a
82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T11:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T11:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xgmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8vnfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.778735 5030 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42bsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecb0da03-4159-42f4-aa72-67c3cbbca4db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T11:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fff573369e944870cf4c9f79dd2581b40e6a544fe77078b37b875ad930ce32ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T11:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6dgbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T11:53:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42bsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:32Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.849866 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.849951 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.849970 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.849999 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.850021 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.953393 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.953507 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.953538 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.953567 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:32 crc kubenswrapper[5030]: I1128 11:54:32.953594 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:32Z","lastTransitionTime":"2025-11-28T11:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.056153 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.056201 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.056221 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.056249 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.056267 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.160205 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.160265 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.160281 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.160306 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.160327 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.263291 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.263377 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.263403 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.263437 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.263459 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.373287 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.373383 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.373428 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.373491 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.373512 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.392041 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:33 crc kubenswrapper[5030]: E1128 11:54:33.392373 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.477186 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.477291 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.477313 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.477401 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.477428 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.581213 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.581369 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.581441 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.581499 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.581517 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.684116 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.684175 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.684199 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.684230 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.684251 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.787639 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.787697 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.787710 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.787735 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.787755 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.890912 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.890955 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.890969 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.890987 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.891001 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.994232 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.994346 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.994415 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.994515 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:33 crc kubenswrapper[5030]: I1128 11:54:33.994546 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:33Z","lastTransitionTime":"2025-11-28T11:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.098388 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.098522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.098553 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.098647 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.098676 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.202754 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.202857 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.202933 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.202993 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.203011 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.306694 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.306766 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.306787 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.306810 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.306826 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.392759 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.392759 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.392829 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:34 crc kubenswrapper[5030]: E1128 11:54:34.393259 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:34 crc kubenswrapper[5030]: E1128 11:54:34.393367 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:34 crc kubenswrapper[5030]: E1128 11:54:34.393438 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.409786 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.409847 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.409864 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.409885 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.409904 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.512673 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.512728 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.512744 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.512766 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.512785 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.615549 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.615614 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.615633 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.615658 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.615675 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.718253 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.718375 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.718395 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.718429 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.718461 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.822571 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.822700 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.822720 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.822751 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.822804 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.927228 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.927324 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.927350 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.927380 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:34 crc kubenswrapper[5030]: I1128 11:54:34.927399 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:34Z","lastTransitionTime":"2025-11-28T11:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.030892 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.030950 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.030971 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.030994 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.031014 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.134860 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.134918 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.134935 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.134959 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.134978 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.239556 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.239622 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.239642 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.239669 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.239689 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.344926 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.345007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.345028 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.345065 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.345116 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.392200 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:35 crc kubenswrapper[5030]: E1128 11:54:35.392398 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.448922 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.448996 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.449013 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.449043 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.449066 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.551848 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.551910 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.551927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.551952 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.551972 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.656055 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.656140 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.656158 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.656200 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.656251 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.760254 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.760330 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.760348 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.760378 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.760398 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.793377 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.793447 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.793506 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.793539 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.793559 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: E1128 11:54:35.824388 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.829577 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.829647 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.829665 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.829698 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.829716 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: E1128 11:54:35.852571 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.857821 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.857887 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.857905 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.857983 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.858010 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: E1128 11:54:35.882573 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.888286 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.888338 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.888358 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.888385 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.888408 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: E1128 11:54:35.912795 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.918813 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.918877 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.918895 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.918921 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.918940 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:35 crc kubenswrapper[5030]: E1128 11:54:35.942751 5030 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T11:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b6cd5630-9e21-4ec4-bd29-727ed3f2d5f0\\\",\\\"systemUUID\\\":\\\"c965c05c-761f-4745-b234-194f03087472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T11:54:35Z is after 2025-08-24T17:21:41Z" Nov 28 11:54:35 crc kubenswrapper[5030]: E1128 11:54:35.942998 5030 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.946013 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.946079 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.946092 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.946120 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:35 crc kubenswrapper[5030]: I1128 11:54:35.946138 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:35Z","lastTransitionTime":"2025-11-28T11:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.049058 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.049122 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.049138 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.049161 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.049178 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.152499 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.152529 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.152538 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.152553 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.152563 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.256175 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.256255 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.256570 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.256651 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.256678 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.359918 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.359971 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.359988 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.360008 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.360025 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.392913 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.392967 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.393006 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:36 crc kubenswrapper[5030]: E1128 11:54:36.393147 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:36 crc kubenswrapper[5030]: E1128 11:54:36.393229 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:36 crc kubenswrapper[5030]: E1128 11:54:36.393285 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.463419 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.463489 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.463500 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.463522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.463534 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.566525 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.566590 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.566606 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.566629 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.566645 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.671039 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.671100 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.671112 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.671137 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.671152 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.774936 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.775039 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.775059 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.775117 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.775137 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.878799 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.878854 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.878876 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.878939 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.878960 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.982025 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.982080 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.982097 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.982125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:36 crc kubenswrapper[5030]: I1128 11:54:36.982146 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:36Z","lastTransitionTime":"2025-11-28T11:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.085976 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.086024 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.086033 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.086055 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.086066 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:37Z","lastTransitionTime":"2025-11-28T11:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.189417 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.189486 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.189497 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.189522 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.189534 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:37Z","lastTransitionTime":"2025-11-28T11:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.293313 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.293388 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.293406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.293433 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.293453 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:37Z","lastTransitionTime":"2025-11-28T11:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.392856 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:37 crc kubenswrapper[5030]: E1128 11:54:37.393115 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.396311 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.396356 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.396368 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.396387 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.396400 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:37Z","lastTransitionTime":"2025-11-28T11:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.499746 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.499819 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.499836 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.499861 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.499880 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:37Z","lastTransitionTime":"2025-11-28T11:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.913823 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.913973 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.914427 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.914513 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:37 crc kubenswrapper[5030]: I1128 11:54:37.914564 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:37Z","lastTransitionTime":"2025-11-28T11:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.019350 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.019395 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.019406 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.019460 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.019488 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.122613 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.122712 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.122731 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.122757 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.122807 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.226891 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.226974 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.226998 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.227050 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.227077 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.330007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.330065 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.330077 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.330099 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.330112 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.392804 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.392818 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.393566 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:38 crc kubenswrapper[5030]: E1128 11:54:38.393644 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:38 crc kubenswrapper[5030]: E1128 11:54:38.393816 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:38 crc kubenswrapper[5030]: E1128 11:54:38.393913 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.434287 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.434382 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.434427 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.434450 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.434511 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.538196 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.538298 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.538320 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.538359 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.538382 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.642929 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.643011 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.643034 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.643067 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.643085 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.746923 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.746992 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.747012 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.747041 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.747061 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.851355 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.851410 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.851428 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.851455 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.851511 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.954890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.955010 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.955034 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.955064 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:38 crc kubenswrapper[5030]: I1128 11:54:38.955082 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:38Z","lastTransitionTime":"2025-11-28T11:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.058893 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.058954 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.058972 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.058999 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.059025 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.161969 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.162026 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.162043 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.162066 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.162083 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.265118 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.265161 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.265174 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.265193 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.265208 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.368310 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.368373 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.368402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.368433 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.368458 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.392118 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:39 crc kubenswrapper[5030]: E1128 11:54:39.392462 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.471402 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.471460 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.471494 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.471516 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.471532 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.574546 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.574624 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.574640 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.574669 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.574692 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.678771 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.678851 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.678878 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.678910 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.678931 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.738154 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:39 crc kubenswrapper[5030]: E1128 11:54:39.738553 5030 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:54:39 crc kubenswrapper[5030]: E1128 11:54:39.738708 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs podName:a047de37-e5fb-49f1-8b34-94c084894e18 nodeName:}" failed. No retries permitted until 2025-11-28 11:55:43.738676076 +0000 UTC m=+161.680418789 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs") pod "network-metrics-daemon-zg94c" (UID: "a047de37-e5fb-49f1-8b34-94c084894e18") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.783006 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.783088 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.783110 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.783144 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.783164 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.886740 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.886840 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.886865 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.886904 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.886929 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.991759 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.991885 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.991911 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.991943 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:39 crc kubenswrapper[5030]: I1128 11:54:39.991965 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:39Z","lastTransitionTime":"2025-11-28T11:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.096412 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.096519 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.096540 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.096569 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.096591 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.200022 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.200091 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.200101 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.200123 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.200162 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.304425 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.304498 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.304509 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.304529 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.304545 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.393156 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.393196 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.393236 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:40 crc kubenswrapper[5030]: E1128 11:54:40.393704 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:40 crc kubenswrapper[5030]: E1128 11:54:40.393871 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:40 crc kubenswrapper[5030]: E1128 11:54:40.394024 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.407709 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.407787 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.407809 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.407834 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.407854 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.511865 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.511950 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.511975 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.512011 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.512035 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.616256 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.616341 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.616360 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.616387 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.616406 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.720462 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.720584 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.720611 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.720646 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.720675 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.824643 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.824716 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.824733 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.824760 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.824778 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.929097 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.929163 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.929178 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.929207 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:40 crc kubenswrapper[5030]: I1128 11:54:40.929224 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:40Z","lastTransitionTime":"2025-11-28T11:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.032710 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.032780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.032797 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.032819 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.032838 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.135740 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.135812 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.135830 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.135859 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.135877 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.239327 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.239400 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.239419 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.239445 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.239462 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.343125 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.343233 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.343253 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.343276 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.343292 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.392585 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:41 crc kubenswrapper[5030]: E1128 11:54:41.392789 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.445996 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.446064 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.446084 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.446109 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.446131 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.548780 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.548851 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.548868 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.548897 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.548918 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.652060 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.652137 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.652156 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.652182 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.652201 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.755809 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.755892 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.755909 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.755934 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.755952 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.859187 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.859249 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.859270 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.859293 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.859310 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.962751 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.962808 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.962824 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.962880 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:41 crc kubenswrapper[5030]: I1128 11:54:41.962899 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:41Z","lastTransitionTime":"2025-11-28T11:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.065582 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.065653 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.065683 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.065715 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.065737 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.169021 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.169073 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.169092 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.169115 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.169131 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.272254 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.272326 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.272347 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.272377 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.272395 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.375944 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.376037 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.376067 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.376102 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.376125 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.392381 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.392648 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:42 crc kubenswrapper[5030]: E1128 11:54:42.392954 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.393034 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:42 crc kubenswrapper[5030]: E1128 11:54:42.393172 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:42 crc kubenswrapper[5030]: E1128 11:54:42.393305 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.394344 5030 scope.go:117] "RemoveContainer" containerID="7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116" Nov 28 11:54:42 crc kubenswrapper[5030]: E1128 11:54:42.394622 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8vnfr_openshift-ovn-kubernetes(44c9601c-cc85-4e79-aadd-8d20e2ea9f12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.487640 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.487746 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.487812 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.487840 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.487903 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.487643 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7w8nl" podStartSLOduration=82.487619376 podStartE2EDuration="1m22.487619376s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.46654269 +0000 UTC m=+100.408285413" watchObservedRunningTime="2025-11-28 11:54:42.487619376 +0000 UTC m=+100.429362089" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.513019 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podStartSLOduration=82.512979865 podStartE2EDuration="1m22.512979865s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.488972392 +0000 UTC m=+100.430715065" watchObservedRunningTime="2025-11-28 11:54:42.512979865 +0000 UTC m=+100.454722588" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.513231 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kfz78" podStartSLOduration=81.513223372 podStartE2EDuration="1m21.513223372s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.512716328 +0000 UTC m=+100.454459011" watchObservedRunningTime="2025-11-28 11:54:42.513223372 +0000 UTC m=+100.454966095" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.545239 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-42bsd" podStartSLOduration=81.545202645 
podStartE2EDuration="1m21.545202645s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.53172941 +0000 UTC m=+100.473472133" watchObservedRunningTime="2025-11-28 11:54:42.545202645 +0000 UTC m=+100.486945368" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.559233 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.559197834 podStartE2EDuration="34.559197834s" podCreationTimestamp="2025-11-28 11:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.558181568 +0000 UTC m=+100.499924251" watchObservedRunningTime="2025-11-28 11:54:42.559197834 +0000 UTC m=+100.500940547" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.576302 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.576276566 podStartE2EDuration="1m22.576276566s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.575857734 +0000 UTC m=+100.517600447" watchObservedRunningTime="2025-11-28 11:54:42.576276566 +0000 UTC m=+100.518019249" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.591793 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.592042 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.592112 5030 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.592189 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.592254 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.678033 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.677997849 podStartE2EDuration="1m21.677997849s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.677946398 +0000 UTC m=+100.619689091" watchObservedRunningTime="2025-11-28 11:54:42.677997849 +0000 UTC m=+100.619740582" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.695666 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.695746 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.695766 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.695793 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 
11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.695816 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.697232 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.697206566 podStartE2EDuration="54.697206566s" podCreationTimestamp="2025-11-28 11:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.696767065 +0000 UTC m=+100.638509748" watchObservedRunningTime="2025-11-28 11:54:42.697206566 +0000 UTC m=+100.638949269" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.763116 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=81.763092394 podStartE2EDuration="1m21.763092394s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.758564215 +0000 UTC m=+100.700306938" watchObservedRunningTime="2025-11-28 11:54:42.763092394 +0000 UTC m=+100.704835077" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.798349 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.798396 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 
11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.798409 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.798427 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.798462 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.815667 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cx2sr" podStartSLOduration=81.815644661 podStartE2EDuration="1m21.815644661s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.813969266 +0000 UTC m=+100.755711959" watchObservedRunningTime="2025-11-28 11:54:42.815644661 +0000 UTC m=+100.757387354" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.828998 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-25dph" podStartSLOduration=80.828972542 podStartE2EDuration="1m20.828972542s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:42.828713966 +0000 UTC m=+100.770456659" watchObservedRunningTime="2025-11-28 11:54:42.828972542 +0000 UTC 
m=+100.770715225" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.901226 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.901265 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.901279 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.901298 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:42 crc kubenswrapper[5030]: I1128 11:54:42.901311 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:42Z","lastTransitionTime":"2025-11-28T11:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.003538 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.003602 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.003615 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.003639 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.003655 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.107319 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.107371 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.107391 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.107420 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.107437 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.211723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.211795 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.211815 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.211840 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.211860 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.316210 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.316278 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.316296 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.316323 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.316340 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.392793 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:43 crc kubenswrapper[5030]: E1128 11:54:43.393008 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.419411 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.419520 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.419549 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.419579 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.419603 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.524213 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.524275 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.524292 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.524316 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.524334 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.627872 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.627920 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.627931 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.627952 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.627964 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.731264 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.731299 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.731310 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.731339 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.731351 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.834438 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.834540 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.834552 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.834576 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.834589 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.938157 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.938213 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.938238 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.938268 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:43 crc kubenswrapper[5030]: I1128 11:54:43.938291 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:43Z","lastTransitionTime":"2025-11-28T11:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.042458 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.042573 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.042598 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.042630 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.042651 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.146307 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.146368 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.146384 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.146534 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.146567 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.250288 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.250359 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.250382 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.250411 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.250446 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.354835 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.354923 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.354942 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.354972 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.354991 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.392359 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:44 crc kubenswrapper[5030]: E1128 11:54:44.392591 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.392826 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.392893 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:44 crc kubenswrapper[5030]: E1128 11:54:44.392981 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:44 crc kubenswrapper[5030]: E1128 11:54:44.393078 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.458642 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.458723 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.458741 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.458771 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.458794 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.562264 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.562314 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.562327 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.562347 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.562359 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.666346 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.666440 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.666462 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.666537 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.666560 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.769910 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.769989 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.770007 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.770037 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.770059 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.874263 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.874332 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.874349 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.874380 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.874404 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.977837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.977879 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.977890 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.977909 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:44 crc kubenswrapper[5030]: I1128 11:54:44.977920 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:44Z","lastTransitionTime":"2025-11-28T11:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.081814 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.081878 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.081896 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.081920 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.081936 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.184750 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.184819 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.184837 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.184860 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.184878 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.287943 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.288008 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.288028 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.288053 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.288069 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.391652 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.391738 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.391761 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.391795 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.391817 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.391947 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:45 crc kubenswrapper[5030]: E1128 11:54:45.392142 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.494552 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.494601 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.494609 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.494624 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.494632 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.597981 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.598235 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.598652 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.598741 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.598771 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.702830 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.702895 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.702912 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.702936 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.702954 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.807927 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.808079 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.808142 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.808193 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.808225 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.911831 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.911915 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.911936 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.911961 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:45 crc kubenswrapper[5030]: I1128 11:54:45.911979 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:45Z","lastTransitionTime":"2025-11-28T11:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.015163 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.015204 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.015218 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.015237 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.015250 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:46Z","lastTransitionTime":"2025-11-28T11:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.118246 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.118324 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.118343 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.118364 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.118386 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:46Z","lastTransitionTime":"2025-11-28T11:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.196951 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.197041 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.197062 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.197090 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.197113 5030 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T11:54:46Z","lastTransitionTime":"2025-11-28T11:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.264939 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql"] Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.265571 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.273363 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.273542 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.273584 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.275114 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.319854 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59474078-63b3-45b1-8970-057ac5e5e98d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.319916 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/59474078-63b3-45b1-8970-057ac5e5e98d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.319940 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/59474078-63b3-45b1-8970-057ac5e5e98d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.319962 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59474078-63b3-45b1-8970-057ac5e5e98d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.320019 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/59474078-63b3-45b1-8970-057ac5e5e98d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.392127 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.392172 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:46 crc kubenswrapper[5030]: E1128 11:54:46.392310 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.392741 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:46 crc kubenswrapper[5030]: E1128 11:54:46.392822 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:46 crc kubenswrapper[5030]: E1128 11:54:46.392905 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.421879 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/59474078-63b3-45b1-8970-057ac5e5e98d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.421992 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59474078-63b3-45b1-8970-057ac5e5e98d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.422057 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/59474078-63b3-45b1-8970-057ac5e5e98d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.422087 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/59474078-63b3-45b1-8970-057ac5e5e98d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.422096 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59474078-63b3-45b1-8970-057ac5e5e98d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.422223 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/59474078-63b3-45b1-8970-057ac5e5e98d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.422229 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59474078-63b3-45b1-8970-057ac5e5e98d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.423868 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59474078-63b3-45b1-8970-057ac5e5e98d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.433817 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59474078-63b3-45b1-8970-057ac5e5e98d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc 
kubenswrapper[5030]: I1128 11:54:46.452747 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/59474078-63b3-45b1-8970-057ac5e5e98d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-w9rql\" (UID: \"59474078-63b3-45b1-8970-057ac5e5e98d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: I1128 11:54:46.591494 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" Nov 28 11:54:46 crc kubenswrapper[5030]: W1128 11:54:46.612626 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59474078_63b3_45b1_8970_057ac5e5e98d.slice/crio-680af7c427a8fc17a222e38e280a09f7ec3b060746f260945cb45b550905f926 WatchSource:0}: Error finding container 680af7c427a8fc17a222e38e280a09f7ec3b060746f260945cb45b550905f926: Status 404 returned error can't find the container with id 680af7c427a8fc17a222e38e280a09f7ec3b060746f260945cb45b550905f926 Nov 28 11:54:47 crc kubenswrapper[5030]: I1128 11:54:47.090830 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" event={"ID":"59474078-63b3-45b1-8970-057ac5e5e98d","Type":"ContainerStarted","Data":"f0a6b1b8f87d78d55b98f741bfe3f180678ce71408fc0b1a65ef3a4038d84bc5"} Nov 28 11:54:47 crc kubenswrapper[5030]: I1128 11:54:47.091194 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" event={"ID":"59474078-63b3-45b1-8970-057ac5e5e98d","Type":"ContainerStarted","Data":"680af7c427a8fc17a222e38e280a09f7ec3b060746f260945cb45b550905f926"} Nov 28 11:54:47 crc kubenswrapper[5030]: I1128 11:54:47.113328 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w9rql" podStartSLOduration=87.113291691 podStartE2EDuration="1m27.113291691s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:47.1125259 +0000 UTC m=+105.054268623" watchObservedRunningTime="2025-11-28 11:54:47.113291691 +0000 UTC m=+105.055034404" Nov 28 11:54:47 crc kubenswrapper[5030]: I1128 11:54:47.392927 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:47 crc kubenswrapper[5030]: E1128 11:54:47.393124 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:48 crc kubenswrapper[5030]: I1128 11:54:48.392643 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:48 crc kubenswrapper[5030]: I1128 11:54:48.392750 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:48 crc kubenswrapper[5030]: I1128 11:54:48.392742 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:48 crc kubenswrapper[5030]: E1128 11:54:48.392905 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:48 crc kubenswrapper[5030]: E1128 11:54:48.393076 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:48 crc kubenswrapper[5030]: E1128 11:54:48.393181 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:49 crc kubenswrapper[5030]: I1128 11:54:49.392174 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:49 crc kubenswrapper[5030]: E1128 11:54:49.392375 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:50 crc kubenswrapper[5030]: I1128 11:54:50.392224 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:50 crc kubenswrapper[5030]: I1128 11:54:50.392317 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:50 crc kubenswrapper[5030]: I1128 11:54:50.392364 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:50 crc kubenswrapper[5030]: E1128 11:54:50.392585 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:50 crc kubenswrapper[5030]: E1128 11:54:50.392712 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:50 crc kubenswrapper[5030]: E1128 11:54:50.392832 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:51 crc kubenswrapper[5030]: I1128 11:54:51.392270 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:51 crc kubenswrapper[5030]: E1128 11:54:51.392521 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:52 crc kubenswrapper[5030]: I1128 11:54:52.393499 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:52 crc kubenswrapper[5030]: E1128 11:54:52.393643 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:52 crc kubenswrapper[5030]: I1128 11:54:52.393918 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:52 crc kubenswrapper[5030]: E1128 11:54:52.393978 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:52 crc kubenswrapper[5030]: I1128 11:54:52.394769 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:52 crc kubenswrapper[5030]: E1128 11:54:52.394832 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:53 crc kubenswrapper[5030]: I1128 11:54:53.392071 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:53 crc kubenswrapper[5030]: E1128 11:54:53.392422 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:54 crc kubenswrapper[5030]: I1128 11:54:54.392935 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:54 crc kubenswrapper[5030]: I1128 11:54:54.393006 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:54 crc kubenswrapper[5030]: I1128 11:54:54.393037 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:54 crc kubenswrapper[5030]: E1128 11:54:54.394201 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:54 crc kubenswrapper[5030]: E1128 11:54:54.394390 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:54 crc kubenswrapper[5030]: E1128 11:54:54.394504 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:55 crc kubenswrapper[5030]: I1128 11:54:55.120123 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/1.log" Nov 28 11:54:55 crc kubenswrapper[5030]: I1128 11:54:55.120953 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/0.log" Nov 28 11:54:55 crc kubenswrapper[5030]: I1128 11:54:55.121046 5030 generic.go:334] "Generic (PLEG): container finished" podID="4ee84379-3754-48c5-aaab-15dbc36caa16" containerID="7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e" exitCode=1 Nov 28 11:54:55 crc kubenswrapper[5030]: I1128 11:54:55.121090 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerDied","Data":"7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e"} Nov 28 11:54:55 crc kubenswrapper[5030]: I1128 11:54:55.121139 5030 scope.go:117] "RemoveContainer" containerID="b4c028993e6501478da1b8a0ab6c86574151c5493b5f374e3789926458cea856" Nov 28 11:54:55 crc kubenswrapper[5030]: I1128 11:54:55.121781 5030 scope.go:117] "RemoveContainer" containerID="7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e" Nov 28 11:54:55 crc 
kubenswrapper[5030]: E1128 11:54:55.122247 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kfz78_openshift-multus(4ee84379-3754-48c5-aaab-15dbc36caa16)\"" pod="openshift-multus/multus-kfz78" podUID="4ee84379-3754-48c5-aaab-15dbc36caa16" Nov 28 11:54:55 crc kubenswrapper[5030]: I1128 11:54:55.392806 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:55 crc kubenswrapper[5030]: E1128 11:54:55.393754 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:56 crc kubenswrapper[5030]: I1128 11:54:56.127205 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/1.log" Nov 28 11:54:56 crc kubenswrapper[5030]: I1128 11:54:56.393101 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:56 crc kubenswrapper[5030]: I1128 11:54:56.393532 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:56 crc kubenswrapper[5030]: E1128 11:54:56.393677 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:56 crc kubenswrapper[5030]: I1128 11:54:56.394108 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:56 crc kubenswrapper[5030]: I1128 11:54:56.394125 5030 scope.go:117] "RemoveContainer" containerID="7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116" Nov 28 11:54:56 crc kubenswrapper[5030]: E1128 11:54:56.394237 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:56 crc kubenswrapper[5030]: E1128 11:54:56.394332 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:57 crc kubenswrapper[5030]: I1128 11:54:57.133647 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/3.log" Nov 28 11:54:57 crc kubenswrapper[5030]: I1128 11:54:57.136962 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerStarted","Data":"5a6f6d706fba68f794de96394a58708bb284b375ac3193a214cd4f55b207d8d1"} Nov 28 11:54:57 crc kubenswrapper[5030]: I1128 11:54:57.137562 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:54:57 crc kubenswrapper[5030]: I1128 11:54:57.168773 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podStartSLOduration=96.168737902 podStartE2EDuration="1m36.168737902s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:54:57.166885993 +0000 UTC m=+115.108628696" watchObservedRunningTime="2025-11-28 11:54:57.168737902 +0000 UTC m=+115.110480605" Nov 28 11:54:57 crc kubenswrapper[5030]: I1128 11:54:57.352797 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zg94c"] Nov 28 11:54:57 crc kubenswrapper[5030]: I1128 11:54:57.353037 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:57 crc kubenswrapper[5030]: E1128 11:54:57.353219 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:54:58 crc kubenswrapper[5030]: I1128 11:54:58.392817 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:54:58 crc kubenswrapper[5030]: I1128 11:54:58.392921 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:54:58 crc kubenswrapper[5030]: I1128 11:54:58.392836 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:54:58 crc kubenswrapper[5030]: E1128 11:54:58.393059 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:54:58 crc kubenswrapper[5030]: E1128 11:54:58.393156 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:54:58 crc kubenswrapper[5030]: E1128 11:54:58.393254 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:54:59 crc kubenswrapper[5030]: I1128 11:54:59.392361 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:54:59 crc kubenswrapper[5030]: E1128 11:54:59.392695 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:55:00 crc kubenswrapper[5030]: I1128 11:55:00.392326 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:00 crc kubenswrapper[5030]: I1128 11:55:00.392384 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:00 crc kubenswrapper[5030]: I1128 11:55:00.392445 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:00 crc kubenswrapper[5030]: E1128 11:55:00.392587 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:55:00 crc kubenswrapper[5030]: E1128 11:55:00.392700 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:55:00 crc kubenswrapper[5030]: E1128 11:55:00.392879 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:55:01 crc kubenswrapper[5030]: I1128 11:55:01.392399 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:01 crc kubenswrapper[5030]: E1128 11:55:01.392567 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:55:02 crc kubenswrapper[5030]: E1128 11:55:02.317358 5030 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 11:55:02 crc kubenswrapper[5030]: I1128 11:55:02.391964 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:02 crc kubenswrapper[5030]: I1128 11:55:02.391994 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:02 crc kubenswrapper[5030]: I1128 11:55:02.392080 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:02 crc kubenswrapper[5030]: E1128 11:55:02.393035 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:55:02 crc kubenswrapper[5030]: E1128 11:55:02.393185 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:55:02 crc kubenswrapper[5030]: E1128 11:55:02.393309 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:55:02 crc kubenswrapper[5030]: E1128 11:55:02.470016 5030 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 11:55:03 crc kubenswrapper[5030]: I1128 11:55:03.392894 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:03 crc kubenswrapper[5030]: E1128 11:55:03.393087 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:55:04 crc kubenswrapper[5030]: I1128 11:55:04.392258 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:04 crc kubenswrapper[5030]: I1128 11:55:04.392405 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:04 crc kubenswrapper[5030]: E1128 11:55:04.392621 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:55:04 crc kubenswrapper[5030]: I1128 11:55:04.392662 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:04 crc kubenswrapper[5030]: E1128 11:55:04.392986 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:55:04 crc kubenswrapper[5030]: E1128 11:55:04.393192 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:55:05 crc kubenswrapper[5030]: I1128 11:55:05.392568 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:05 crc kubenswrapper[5030]: E1128 11:55:05.392900 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:55:06 crc kubenswrapper[5030]: I1128 11:55:06.392956 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:06 crc kubenswrapper[5030]: I1128 11:55:06.393028 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:06 crc kubenswrapper[5030]: I1128 11:55:06.392971 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:06 crc kubenswrapper[5030]: E1128 11:55:06.393184 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:55:06 crc kubenswrapper[5030]: E1128 11:55:06.393313 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:55:06 crc kubenswrapper[5030]: E1128 11:55:06.393681 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:55:07 crc kubenswrapper[5030]: I1128 11:55:07.392144 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:07 crc kubenswrapper[5030]: E1128 11:55:07.392357 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:55:07 crc kubenswrapper[5030]: E1128 11:55:07.471509 5030 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 11:55:08 crc kubenswrapper[5030]: I1128 11:55:08.392063 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:08 crc kubenswrapper[5030]: I1128 11:55:08.392063 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:08 crc kubenswrapper[5030]: I1128 11:55:08.392063 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:08 crc kubenswrapper[5030]: E1128 11:55:08.392267 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:55:08 crc kubenswrapper[5030]: E1128 11:55:08.392522 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:55:08 crc kubenswrapper[5030]: E1128 11:55:08.392455 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:55:09 crc kubenswrapper[5030]: I1128 11:55:09.392656 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:09 crc kubenswrapper[5030]: E1128 11:55:09.392897 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:55:10 crc kubenswrapper[5030]: I1128 11:55:10.392398 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:10 crc kubenswrapper[5030]: E1128 11:55:10.392730 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:55:10 crc kubenswrapper[5030]: I1128 11:55:10.392844 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:10 crc kubenswrapper[5030]: E1128 11:55:10.393051 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:55:10 crc kubenswrapper[5030]: I1128 11:55:10.393344 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:10 crc kubenswrapper[5030]: E1128 11:55:10.393498 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:55:10 crc kubenswrapper[5030]: I1128 11:55:10.393567 5030 scope.go:117] "RemoveContainer" containerID="7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e" Nov 28 11:55:11 crc kubenswrapper[5030]: I1128 11:55:11.194951 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/1.log" Nov 28 11:55:11 crc kubenswrapper[5030]: I1128 11:55:11.195396 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerStarted","Data":"018e3d90020cc03b39dc0110a6414d3de5aa9a5b4fdff14fe5f0fec5829fd973"} Nov 28 11:55:11 crc kubenswrapper[5030]: I1128 11:55:11.392832 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:11 crc kubenswrapper[5030]: E1128 11:55:11.393028 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zg94c" podUID="a047de37-e5fb-49f1-8b34-94c084894e18" Nov 28 11:55:12 crc kubenswrapper[5030]: I1128 11:55:12.393017 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:12 crc kubenswrapper[5030]: I1128 11:55:12.393108 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:12 crc kubenswrapper[5030]: I1128 11:55:12.395443 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:12 crc kubenswrapper[5030]: E1128 11:55:12.395679 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 11:55:12 crc kubenswrapper[5030]: E1128 11:55:12.395642 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 11:55:12 crc kubenswrapper[5030]: E1128 11:55:12.396128 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 11:55:13 crc kubenswrapper[5030]: I1128 11:55:13.392297 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:13 crc kubenswrapper[5030]: I1128 11:55:13.395505 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 11:55:13 crc kubenswrapper[5030]: I1128 11:55:13.395956 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 11:55:14 crc kubenswrapper[5030]: I1128 11:55:14.391990 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:14 crc kubenswrapper[5030]: I1128 11:55:14.392065 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:14 crc kubenswrapper[5030]: I1128 11:55:14.392009 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:14 crc kubenswrapper[5030]: I1128 11:55:14.396299 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 11:55:14 crc kubenswrapper[5030]: I1128 11:55:14.398547 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 11:55:14 crc kubenswrapper[5030]: I1128 11:55:14.403264 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 11:55:14 crc kubenswrapper[5030]: I1128 11:55:14.405289 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.887597 5030 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 28 11:55:16 crc 
kubenswrapper[5030]: I1128 11:55:16.945751 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr"] Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.946622 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.947140 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qv6pd"] Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.948035 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.948254 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mnz5b"] Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.949105 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.953371 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.953415 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.954333 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.956779 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.957640 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.957971 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.960526 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv"] Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.961524 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.962336 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bdbjw"] Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.962702 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.962779 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.962990 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.963285 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.963415 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.963922 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.964193 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.964482 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.966394 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.966783 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.967183 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.967261 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.969404 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq"] Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.970667 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.971532 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.971785 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.971993 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8"] Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.972787 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.986945 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.987772 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.989333 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.991099 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.997239 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.997378 5030 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"audit-1" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.998137 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 11:55:16 crc kubenswrapper[5030]: I1128 11:55:16.998354 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gzzl"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:16.998863 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:16.999399 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.000271 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.001118 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.001223 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.003320 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.003428 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.007143 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.022203 5030 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.022552 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.022688 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.022997 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.023125 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.023551 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.024516 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.025040 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l6ggh"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.025595 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l6ggh" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.026097 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.026401 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.027405 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.028143 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mt2\" (UniqueName: \"kubernetes.io/projected/19e32b00-1659-4841-b343-d23e28700081-kube-api-access-w8mt2\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.028197 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19e32b00-1659-4841-b343-d23e28700081-auth-proxy-config\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.028222 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e32b00-1659-4841-b343-d23e28700081-config\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.028260 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19e32b00-1659-4841-b343-d23e28700081-machine-approver-tls\") pod \"machine-approver-56656f9798-s5xdv\" (UID: 
\"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.030752 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4r4f7"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.031446 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.033276 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.035585 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.035613 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.035870 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.035985 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.036088 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.036211 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.038826 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7c24l"] Nov 28 
11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.039689 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.041275 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.045483 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.045545 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.045848 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.046023 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.047570 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.047964 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.048129 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.048295 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.048506 5030 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.048603 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.048708 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.048751 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.048827 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.049229 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.049784 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.049877 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.050145 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.050386 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.050702 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.050787 5030 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.051365 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-r2vs9"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.051770 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.051807 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.052106 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.052509 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-84zsn"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.052863 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.053209 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.056658 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.056665 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.053525 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.053579 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.053623 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.053680 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.055935 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.057239 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.056519 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.053447 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.056790 5030 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.057398 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.050643 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.063605 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.063880 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.064358 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.073918 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.074670 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.080454 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.081589 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.098309 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.098886 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8vhfh"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.104707 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.105445 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.106708 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dz6n5"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.107758 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.107816 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.114029 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-456s8"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.115036 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.115462 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.116596 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.121562 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.122684 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.122813 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.126678 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mnz5b"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.126750 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gzzl"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.148783 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.149140 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.149568 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.149785 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150252 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-client-ca\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150271 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-serving-cert\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150289 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-config\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150312 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c19601-52c5-40bd-8640-3fd0128e7b6a-config\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150326 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-config\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150341 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-trusted-ca\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150358 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-config\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150373 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gnk\" (UniqueName: \"kubernetes.io/projected/49559462-c755-4be6-8277-c8cc20aeb0e0-kube-api-access-66gnk\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150391 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-etcd-client\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc 
kubenswrapper[5030]: I1128 11:55:17.150405 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-client-ca\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150420 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrq25\" (UniqueName: \"kubernetes.io/projected/4310c2c4-4ad9-4820-abc9-09f761fa3a71-kube-api-access-xrq25\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150435 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48p6w\" (UniqueName: \"kubernetes.io/projected/a5c19601-52c5-40bd-8640-3fd0128e7b6a-kube-api-access-48p6w\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150448 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-etcd-serving-ca\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150462 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.150483 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5c19601-52c5-40bd-8640-3fd0128e7b6a-images\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.151667 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152076 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152436 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152505 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49559462-c755-4be6-8277-c8cc20aeb0e0-serving-cert\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152509 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bdbjw"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152563 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19e32b00-1659-4841-b343-d23e28700081-machine-approver-tls\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152592 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cc7\" (UniqueName: \"kubernetes.io/projected/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-kube-api-access-q2cc7\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152607 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95d6b274-7def-4790-b0ab-bae4d0f8d6db-node-pullsecrets\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152629 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4310c2c4-4ad9-4820-abc9-09f761fa3a71-serving-cert\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152660 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-encryption-config\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152826 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkh97\" (UniqueName: \"kubernetes.io/projected/7014aabc-8352-44c9-964a-30fdbbcb47d9-kube-api-access-nkh97\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152844 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95d6b274-7def-4790-b0ab-bae4d0f8d6db-audit-dir\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152931 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mt2\" (UniqueName: \"kubernetes.io/projected/19e32b00-1659-4841-b343-d23e28700081-kube-api-access-w8mt2\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.152984 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-serving-cert\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.153191 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a5c19601-52c5-40bd-8640-3fd0128e7b6a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.153213 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-audit\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.153265 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19e32b00-1659-4841-b343-d23e28700081-auth-proxy-config\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.153282 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-audit-dir\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.153333 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e32b00-1659-4841-b343-d23e28700081-config\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.153349 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-config\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.153892 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19e32b00-1659-4841-b343-d23e28700081-auth-proxy-config\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154171 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e32b00-1659-4841-b343-d23e28700081-config\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154246 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154275 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7014aabc-8352-44c9-964a-30fdbbcb47d9-serving-cert\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154290 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp5vh\" (UniqueName: \"kubernetes.io/projected/95d6b274-7def-4790-b0ab-bae4d0f8d6db-kube-api-access-hp5vh\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154339 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154361 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2qq\" (UniqueName: \"kubernetes.io/projected/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-kube-api-access-dm2qq\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154376 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7014aabc-8352-44c9-964a-30fdbbcb47d9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154411 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-encryption-config\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154465 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-image-import-ca\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154484 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-serving-cert\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154647 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-audit-policies\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154663 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.154678 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-etcd-client\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.158690 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.159533 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.160966 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b479q"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.161576 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.162929 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19e32b00-1659-4841-b343-d23e28700081-machine-approver-tls\") pod \"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.167594 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.167935 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.168447 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.168573 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.168671 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.168781 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.168879 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.168976 5030 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.169065 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.169150 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.169434 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.169552 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.169835 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.172024 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.172446 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qv6pd"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.172523 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.173067 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.174785 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.175481 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.176980 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4f6gt"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.177630 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.179942 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.180278 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.181978 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.182310 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.182476 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-frtvx"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.183334 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.185816 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.185848 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.186214 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.186851 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.187605 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.188536 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.189082 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.192122 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9blt4"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.192890 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.193049 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.193898 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.198908 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.199813 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.203317 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.206014 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.206270 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.215400 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.216442 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l6ggh"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.218282 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 11:55:17 
crc kubenswrapper[5030]: I1128 11:55:17.219195 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-84zsn"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.221987 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4r4f7"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.223787 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.228580 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.241739 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.250595 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.252770 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257635 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-config\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257675 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-trusted-ca\") pod \"console-operator-58897d9998-6gzzl\" (UID: 
\"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257705 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-config\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257726 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gpkx\" (UniqueName: \"kubernetes.io/projected/feba0e47-9667-44da-ab70-50346b203fa6-kube-api-access-2gpkx\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257749 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gnk\" (UniqueName: \"kubernetes.io/projected/49559462-c755-4be6-8277-c8cc20aeb0e0-kube-api-access-66gnk\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257773 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-etcd-client\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257795 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-client-ca\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257813 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx8q6\" (UniqueName: \"kubernetes.io/projected/1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7-kube-api-access-zx8q6\") pod \"downloads-7954f5f757-l6ggh\" (UID: \"1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7\") " pod="openshift-console/downloads-7954f5f757-l6ggh" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257835 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48p6w\" (UniqueName: \"kubernetes.io/projected/a5c19601-52c5-40bd-8640-3fd0128e7b6a-kube-api-access-48p6w\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257856 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrq25\" (UniqueName: \"kubernetes.io/projected/4310c2c4-4ad9-4820-abc9-09f761fa3a71-kube-api-access-xrq25\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257880 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx2f\" (UniqueName: \"kubernetes.io/projected/5e0cbf40-e788-44c2-9eba-ddd17d412551-kube-api-access-tdx2f\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " 
pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257902 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5c19601-52c5-40bd-8640-3fd0128e7b6a-images\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257920 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49559462-c755-4be6-8277-c8cc20aeb0e0-serving-cert\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257941 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-etcd-serving-ca\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257959 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.257980 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-config\") pod \"console-f9d7485db-4r4f7\" (UID: 
\"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258002 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfrr\" (UniqueName: \"kubernetes.io/projected/d077d777-7c83-42d3-9c90-b9155040a1ea-kube-api-access-npfrr\") pod \"control-plane-machine-set-operator-78cbb6b69f-47vf7\" (UID: \"d077d777-7c83-42d3-9c90-b9155040a1ea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258026 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3073870f-c73c-4fcd-8dbf-e8c210aaa197-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258048 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d077d777-7c83-42d3-9c90-b9155040a1ea-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-47vf7\" (UID: \"d077d777-7c83-42d3-9c90-b9155040a1ea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258079 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cc7\" (UniqueName: \"kubernetes.io/projected/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-kube-api-access-q2cc7\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 
11:55:17.258102 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95d6b274-7def-4790-b0ab-bae4d0f8d6db-node-pullsecrets\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258123 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4310c2c4-4ad9-4820-abc9-09f761fa3a71-serving-cert\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258143 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmbmr\" (UniqueName: \"kubernetes.io/projected/0e7af0be-101e-4d83-92ab-c88b3cf47a55-kube-api-access-cmbmr\") pod \"dns-operator-744455d44c-84zsn\" (UID: \"0e7af0be-101e-4d83-92ab-c88b3cf47a55\") " pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258162 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkh97\" (UniqueName: \"kubernetes.io/projected/7014aabc-8352-44c9-964a-30fdbbcb47d9-kube-api-access-nkh97\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258182 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-serving-cert\") pod \"console-f9d7485db-4r4f7\" (UID: 
\"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258200 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-oauth-config\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258222 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-encryption-config\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258252 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95d6b274-7def-4790-b0ab-bae4d0f8d6db-audit-dir\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258270 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-serving-cert\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258289 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqf8f\" (UniqueName: \"kubernetes.io/projected/3073870f-c73c-4fcd-8dbf-e8c210aaa197-kube-api-access-cqf8f\") pod 
\"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258307 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c19601-52c5-40bd-8640-3fd0128e7b6a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258326 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-audit\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258346 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxv6\" (UniqueName: \"kubernetes.io/projected/5733d243-c607-42f6-b76a-a4852d2771ff-kube-api-access-jwxv6\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258364 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3073870f-c73c-4fcd-8dbf-e8c210aaa197-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258384 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-audit-dir\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258402 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-config\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258419 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feba0e47-9667-44da-ab70-50346b203fa6-trusted-ca\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258447 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258468 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7014aabc-8352-44c9-964a-30fdbbcb47d9-serving-cert\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 
11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258485 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49w2\" (UniqueName: \"kubernetes.io/projected/ca2907d2-9fad-41b4-b625-19e05e2884c5-kube-api-access-m49w2\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258516 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp5vh\" (UniqueName: \"kubernetes.io/projected/95d6b274-7def-4790-b0ab-bae4d0f8d6db-kube-api-access-hp5vh\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258534 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-service-ca\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258554 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258583 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2qq\" (UniqueName: \"kubernetes.io/projected/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-kube-api-access-dm2qq\") pod 
\"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258602 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7014aabc-8352-44c9-964a-30fdbbcb47d9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258623 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258638 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-service-ca-bundle\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258656 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-encryption-config\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258676 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca2907d2-9fad-41b4-b625-19e05e2884c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258694 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258709 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-trusted-ca-bundle\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258727 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/feba0e47-9667-44da-ab70-50346b203fa6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258759 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-config\") pod 
\"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258775 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5575e8a-bac5-451e-9419-db009e281ea5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258795 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxx8\" (UniqueName: \"kubernetes.io/projected/f5575e8a-bac5-451e-9419-db009e281ea5-kube-api-access-8hxx8\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258825 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258841 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-oauth-serving-cert\") pod \"console-f9d7485db-4r4f7\" (UID: 
\"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258867 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5575e8a-bac5-451e-9419-db009e281ea5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258886 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-audit-policies\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258904 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258920 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-etcd-client\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258937 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-image-import-ca\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258927 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-config\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258956 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-serving-cert\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.258979 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5733d243-c607-42f6-b76a-a4852d2771ff-serving-cert\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.259054 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/feba0e47-9667-44da-ab70-50346b203fa6-metrics-tls\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.259831 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a5c19601-52c5-40bd-8640-3fd0128e7b6a-images\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.261312 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.261855 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-etcd-serving-ca\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.261481 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-client-ca\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.273013 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-etcd-client\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.273513 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7014aabc-8352-44c9-964a-30fdbbcb47d9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.273703 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49559462-c755-4be6-8277-c8cc20aeb0e0-serving-cert\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.273813 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95d6b274-7def-4790-b0ab-bae4d0f8d6db-node-pullsecrets\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.274037 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-audit-policies\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.274045 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7014aabc-8352-44c9-964a-30fdbbcb47d9-serving-cert\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 
11:55:17.274088 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-trusted-ca\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.274115 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95d6b274-7def-4790-b0ab-bae4d0f8d6db-audit-dir\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.274620 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.275026 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.275087 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.275569 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-audit-dir\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.275711 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-audit\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.275922 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277020 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-config\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277336 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-encryption-config\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277457 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-client-ca\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277547 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-image-import-ca\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " 
pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277606 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-r2vs9"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277682 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-config\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277735 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-serving-cert\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277832 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-serving-cert\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.277837 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.278111 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-encryption-config\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.278179 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-client-ca\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.278178 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c19601-52c5-40bd-8640-3fd0128e7b6a-config\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.278262 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d6b274-7def-4790-b0ab-bae4d0f8d6db-config\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.278275 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-config\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.278323 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca2907d2-9fad-41b4-b625-19e05e2884c5-proxy-tls\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.278513 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0e7af0be-101e-4d83-92ab-c88b3cf47a55-metrics-tls\") pod \"dns-operator-744455d44c-84zsn\" (UID: \"0e7af0be-101e-4d83-92ab-c88b3cf47a55\") " pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.279106 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c19601-52c5-40bd-8640-3fd0128e7b6a-config\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.279415 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.279505 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-config\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.279535 5030 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.279957 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c19601-52c5-40bd-8640-3fd0128e7b6a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.280135 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4310c2c4-4ad9-4820-abc9-09f761fa3a71-serving-cert\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.280160 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-serving-cert\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.281133 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.282341 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-456s8"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.282529 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-etcd-client\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.283649 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7c24l"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.284801 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95d6b274-7def-4790-b0ab-bae4d0f8d6db-serving-cert\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.286257 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.286298 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8vhfh"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.287234 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-g77wg"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.288001 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.288357 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.288935 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.289447 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.290640 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.291761 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.296813 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4f6gt"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.298003 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.299281 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jhlzs"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.299896 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.300374 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.301389 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b479q"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.302403 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvvnj"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.303798 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-frtvx"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.303882 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.304512 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kl7gk"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.305314 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.307251 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.308302 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.308786 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.309313 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jhlzs"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.310871 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9blt4"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.311907 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g77wg"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.313048 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvvnj"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.313992 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r"] Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.328187 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.348777 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.368906 5030 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379280 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3073870f-c73c-4fcd-8dbf-e8c210aaa197-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379316 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-config\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379345 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npfrr\" (UniqueName: \"kubernetes.io/projected/d077d777-7c83-42d3-9c90-b9155040a1ea-kube-api-access-npfrr\") pod \"control-plane-machine-set-operator-78cbb6b69f-47vf7\" (UID: \"d077d777-7c83-42d3-9c90-b9155040a1ea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379373 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d077d777-7c83-42d3-9c90-b9155040a1ea-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-47vf7\" (UID: \"d077d777-7c83-42d3-9c90-b9155040a1ea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379418 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cmbmr\" (UniqueName: \"kubernetes.io/projected/0e7af0be-101e-4d83-92ab-c88b3cf47a55-kube-api-access-cmbmr\") pod \"dns-operator-744455d44c-84zsn\" (UID: \"0e7af0be-101e-4d83-92ab-c88b3cf47a55\") " pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379440 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-serving-cert\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379458 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-oauth-config\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379526 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqf8f\" (UniqueName: \"kubernetes.io/projected/3073870f-c73c-4fcd-8dbf-e8c210aaa197-kube-api-access-cqf8f\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379560 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwxv6\" (UniqueName: \"kubernetes.io/projected/5733d243-c607-42f6-b76a-a4852d2771ff-kube-api-access-jwxv6\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 
11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379576 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3073870f-c73c-4fcd-8dbf-e8c210aaa197-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379596 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feba0e47-9667-44da-ab70-50346b203fa6-trusted-ca\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379647 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m49w2\" (UniqueName: \"kubernetes.io/projected/ca2907d2-9fad-41b4-b625-19e05e2884c5-kube-api-access-m49w2\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379670 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-service-ca\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379711 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: 
\"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379728 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-service-ca-bundle\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379746 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca2907d2-9fad-41b4-b625-19e05e2884c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379789 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379807 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-trusted-ca-bundle\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379826 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/feba0e47-9667-44da-ab70-50346b203fa6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379849 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-config\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379866 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5575e8a-bac5-451e-9419-db009e281ea5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379884 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hxx8\" (UniqueName: \"kubernetes.io/projected/f5575e8a-bac5-451e-9419-db009e281ea5-kube-api-access-8hxx8\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379910 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: 
\"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379929 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-oauth-serving-cert\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379946 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5575e8a-bac5-451e-9419-db009e281ea5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.379963 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5733d243-c607-42f6-b76a-a4852d2771ff-serving-cert\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380000 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/feba0e47-9667-44da-ab70-50346b203fa6-metrics-tls\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380040 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ca2907d2-9fad-41b4-b625-19e05e2884c5-proxy-tls\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380058 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0e7af0be-101e-4d83-92ab-c88b3cf47a55-metrics-tls\") pod \"dns-operator-744455d44c-84zsn\" (UID: \"0e7af0be-101e-4d83-92ab-c88b3cf47a55\") " pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380075 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-config\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380129 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gpkx\" (UniqueName: \"kubernetes.io/projected/feba0e47-9667-44da-ab70-50346b203fa6-kube-api-access-2gpkx\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380159 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx8q6\" (UniqueName: \"kubernetes.io/projected/1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7-kube-api-access-zx8q6\") pod \"downloads-7954f5f757-l6ggh\" (UID: \"1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7\") " pod="openshift-console/downloads-7954f5f757-l6ggh" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380186 
5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdx2f\" (UniqueName: \"kubernetes.io/projected/5e0cbf40-e788-44c2-9eba-ddd17d412551-kube-api-access-tdx2f\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380379 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-service-ca-bundle\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380772 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380892 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca2907d2-9fad-41b4-b625-19e05e2884c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.380928 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5733d243-c607-42f6-b76a-a4852d2771ff-config\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.381209 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-service-ca\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.381867 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-trusted-ca-bundle\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.382025 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3073870f-c73c-4fcd-8dbf-e8c210aaa197-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.382580 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-oauth-serving-cert\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.382650 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5575e8a-bac5-451e-9419-db009e281ea5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.382850 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-config\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.383210 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-oauth-config\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.383459 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3073870f-c73c-4fcd-8dbf-e8c210aaa197-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.383947 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d077d777-7c83-42d3-9c90-b9155040a1ea-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-47vf7\" (UID: \"d077d777-7c83-42d3-9c90-b9155040a1ea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.384343 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/0e7af0be-101e-4d83-92ab-c88b3cf47a55-metrics-tls\") pod \"dns-operator-744455d44c-84zsn\" (UID: \"0e7af0be-101e-4d83-92ab-c88b3cf47a55\") " pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.384546 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5575e8a-bac5-451e-9419-db009e281ea5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.384698 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e0cbf40-e788-44c2-9eba-ddd17d412551-console-serving-cert\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.385112 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5733d243-c607-42f6-b76a-a4852d2771ff-serving-cert\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.388356 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.409813 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.429234 5030 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.475085 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.480675 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.489909 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.508727 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.528046 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.549852 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.569249 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.599875 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.608910 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.629772 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 11:55:17 
crc kubenswrapper[5030]: I1128 11:55:17.649599 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.657238 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/feba0e47-9667-44da-ab70-50346b203fa6-metrics-tls\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.678311 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.684114 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feba0e47-9667-44da-ab70-50346b203fa6-trusted-ca\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.689362 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.709120 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.729785 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.749120 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: 
I1128 11:55:17.769205 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.789714 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.810110 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.815224 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca2907d2-9fad-41b4-b625-19e05e2884c5-proxy-tls\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.829796 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.850197 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.869748 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.889283 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.896630 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.909925 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.912006 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-config\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.929385 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.949084 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.970047 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 11:55:17 crc kubenswrapper[5030]: I1128 11:55:17.988823 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.027119 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mt2\" (UniqueName: \"kubernetes.io/projected/19e32b00-1659-4841-b343-d23e28700081-kube-api-access-w8mt2\") pod 
\"machine-approver-56656f9798-s5xdv\" (UID: \"19e32b00-1659-4841-b343-d23e28700081\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.050337 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.069634 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.088637 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.108251 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.129819 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.150046 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.168877 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.187586 5030 request.go:700] Waited for 1.010890345s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0 Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.189592 5030 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.209752 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.229701 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.245714 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.250062 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.270228 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 11:55:18 crc kubenswrapper[5030]: W1128 11:55:18.275130 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19e32b00_1659_4841_b343_d23e28700081.slice/crio-c81ef3da89b6550ade45b3b87941cae09e30195b3e91d4421ca8bfd92f7481d7 WatchSource:0}: Error finding container c81ef3da89b6550ade45b3b87941cae09e30195b3e91d4421ca8bfd92f7481d7: Status 404 returned error can't find the container with id c81ef3da89b6550ade45b3b87941cae09e30195b3e91d4421ca8bfd92f7481d7 Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.290329 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.310213 5030 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.329799 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.350193 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.372791 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.389529 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.409229 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.429327 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.450038 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.469768 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.490711 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.522441 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 
11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.529760 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.550249 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.569508 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.590435 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.610355 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.629694 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.649789 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.670823 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.690356 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.708914 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.729970 5030 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-service-ca"/"signing-key" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.750119 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.775457 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.838259 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gnk\" (UniqueName: \"kubernetes.io/projected/49559462-c755-4be6-8277-c8cc20aeb0e0-kube-api-access-66gnk\") pod \"controller-manager-879f6c89f-qv6pd\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.861632 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrq25\" (UniqueName: \"kubernetes.io/projected/4310c2c4-4ad9-4820-abc9-09f761fa3a71-kube-api-access-xrq25\") pod \"route-controller-manager-6576b87f9c-77js8\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.881867 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.885906 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48p6w\" (UniqueName: \"kubernetes.io/projected/a5c19601-52c5-40bd-8640-3fd0128e7b6a-kube-api-access-48p6w\") pod \"machine-api-operator-5694c8668f-bdbjw\" (UID: \"a5c19601-52c5-40bd-8640-3fd0128e7b6a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.899515 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp5vh\" (UniqueName: \"kubernetes.io/projected/95d6b274-7def-4790-b0ab-bae4d0f8d6db-kube-api-access-hp5vh\") pod \"apiserver-76f77b778f-mnz5b\" (UID: \"95d6b274-7def-4790-b0ab-bae4d0f8d6db\") " pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.916653 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2cc7\" (UniqueName: \"kubernetes.io/projected/41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d-kube-api-access-q2cc7\") pod \"apiserver-7bbb656c7d-b4rgr\" (UID: \"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.939914 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkh97\" (UniqueName: \"kubernetes.io/projected/7014aabc-8352-44c9-964a-30fdbbcb47d9-kube-api-access-nkh97\") pod \"openshift-config-operator-7777fb866f-dtwzq\" (UID: \"7014aabc-8352-44c9-964a-30fdbbcb47d9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.950412 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 
11:55:18.960200 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2qq\" (UniqueName: \"kubernetes.io/projected/8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5-kube-api-access-dm2qq\") pod \"console-operator-58897d9998-6gzzl\" (UID: \"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5\") " pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.970187 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 11:55:18 crc kubenswrapper[5030]: I1128 11:55:18.989431 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.008866 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.029302 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.049121 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.069629 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.080625 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.090226 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.098716 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.109066 5030 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.134151 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.134981 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.149822 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.156569 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.169987 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.175283 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.197446 5030 request.go:700] Waited for 1.891790087s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.201060 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.233140 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npfrr\" (UniqueName: \"kubernetes.io/projected/d077d777-7c83-42d3-9c90-b9155040a1ea-kube-api-access-npfrr\") pod \"control-plane-machine-set-operator-78cbb6b69f-47vf7\" (UID: \"d077d777-7c83-42d3-9c90-b9155040a1ea\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.244412 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqf8f\" (UniqueName: \"kubernetes.io/projected/3073870f-c73c-4fcd-8dbf-e8c210aaa197-kube-api-access-cqf8f\") pod \"openshift-apiserver-operator-796bbdcf4f-zv8vs\" (UID: \"3073870f-c73c-4fcd-8dbf-e8c210aaa197\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.247162 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" event={"ID":"19e32b00-1659-4841-b343-d23e28700081","Type":"ContainerStarted","Data":"445cb62eea7d3d93b9265c307b6734b02ec7cd2cef50f78901bee96d1d5cc9ae"} Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.247200 5030 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" event={"ID":"19e32b00-1659-4841-b343-d23e28700081","Type":"ContainerStarted","Data":"c81ef3da89b6550ade45b3b87941cae09e30195b3e91d4421ca8bfd92f7481d7"} Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.252770 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.269192 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmbmr\" (UniqueName: \"kubernetes.io/projected/0e7af0be-101e-4d83-92ab-c88b3cf47a55-kube-api-access-cmbmr\") pod \"dns-operator-744455d44c-84zsn\" (UID: \"0e7af0be-101e-4d83-92ab-c88b3cf47a55\") " pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.285198 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwxv6\" (UniqueName: \"kubernetes.io/projected/5733d243-c607-42f6-b76a-a4852d2771ff-kube-api-access-jwxv6\") pod \"authentication-operator-69f744f599-r2vs9\" (UID: \"5733d243-c607-42f6-b76a-a4852d2771ff\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.307981 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.314810 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m49w2\" (UniqueName: \"kubernetes.io/projected/ca2907d2-9fad-41b4-b625-19e05e2884c5-kube-api-access-m49w2\") pod \"machine-config-controller-84d6567774-26fxd\" (UID: \"ca2907d2-9fad-41b4-b625-19e05e2884c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.322074 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.328866 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/feba0e47-9667-44da-ab70-50346b203fa6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.349688 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.351207 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdx2f\" (UniqueName: \"kubernetes.io/projected/5e0cbf40-e788-44c2-9eba-ddd17d412551-kube-api-access-tdx2f\") pod \"console-f9d7485db-4r4f7\" (UID: \"5e0cbf40-e788-44c2-9eba-ddd17d412551\") " pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.367159 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gpkx\" (UniqueName: \"kubernetes.io/projected/feba0e47-9667-44da-ab70-50346b203fa6-kube-api-access-2gpkx\") pod \"ingress-operator-5b745b69d9-ppq68\" (UID: \"feba0e47-9667-44da-ab70-50346b203fa6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.383029 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8q6\" (UniqueName: \"kubernetes.io/projected/1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7-kube-api-access-zx8q6\") pod \"downloads-7954f5f757-l6ggh\" (UID: \"1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7\") " pod="openshift-console/downloads-7954f5f757-l6ggh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.406302 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.408353 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.408544 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.411016 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d09d33a-6040-4fd1-85a5-ac3a1ca5a913-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lr5mq\" (UID: \"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.430616 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qv6pd"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.436221 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hxx8\" (UniqueName: \"kubernetes.io/projected/f5575e8a-bac5-451e-9419-db009e281ea5-kube-api-access-8hxx8\") pod \"openshift-controller-manager-operator-756b6f6bc6-pvtql\" (UID: \"f5575e8a-bac5-451e-9419-db009e281ea5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.444883 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.461665 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.463937 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mnz5b"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.466401 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" Nov 28 11:55:19 crc kubenswrapper[5030]: W1128 11:55:19.505990 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95d6b274_7def_4790_b0ab_bae4d0f8d6db.slice/crio-ba60b5f77967f8af7e56c717e457e9c1d306e857c27731a8ce6a1353f1f3d6bd WatchSource:0}: Error finding container ba60b5f77967f8af7e56c717e457e9c1d306e857c27731a8ce6a1353f1f3d6bd: Status 404 returned error can't find the container with id ba60b5f77967f8af7e56c717e457e9c1d306e857c27731a8ce6a1353f1f3d6bd Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.507943 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l6ggh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514094 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-tls\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514129 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-config\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514164 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19432356-d767-4580-9cec-6366011c203c-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514189 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-trusted-ca\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514207 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514226 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-ca\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514252 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514275 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514304 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rkt\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-kube-api-access-p8rkt\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514325 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514351 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834d512d-9d01-48e1-a5a7-035d0e68cccd-config\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514369 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-audit-policies\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514387 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkxl\" (UniqueName: \"kubernetes.io/projected/1de8e8de-7aad-4d28-937b-d13eea43e672-kube-api-access-dpkxl\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514437 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834d512d-9d01-48e1-a5a7-035d0e68cccd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514458 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3562d8-1f85-460c-b49a-c2922d803c5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kt5lf\" (UID: \"cf3562d8-1f85-460c-b49a-c2922d803c5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514556 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514578 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514597 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxb7x\" (UniqueName: \"kubernetes.io/projected/19432356-d767-4580-9cec-6366011c203c-kube-api-access-fxb7x\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514627 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd9592cc-918c-4863-a561-61372a85c43f-audit-dir\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514645 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-default-certificate\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514670 
5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-service-ca\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514691 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514710 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1de8e8de-7aad-4d28-937b-d13eea43e672-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514731 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514752 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514772 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-certificates\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514790 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-metrics-certs\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514809 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0623247c-d46a-4e16-8731-cdd6d2f4a16a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: E1128 11:55:19.514836 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.014821292 +0000 UTC m=+137.956563975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.514858 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-stats-auth\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515131 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8sj\" (UniqueName: \"kubernetes.io/projected/273c4d4b-6972-435b-9fda-e802384dffd2-kube-api-access-dh8sj\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515153 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-serving-cert\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515173 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdbvg\" (UniqueName: 
\"kubernetes.io/projected/cd9592cc-918c-4863-a561-61372a85c43f-kube-api-access-qdbvg\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515481 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/834d512d-9d01-48e1-a5a7-035d0e68cccd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515689 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1de8e8de-7aad-4d28-937b-d13eea43e672-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: W1128 11:55:19.515718 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49559462_c755_4be6_8277_c8cc20aeb0e0.slice/crio-0d16d752196f4427bca0f48b922ffac81a5f58c9f76fb480331a0d9c4a63ea48 WatchSource:0}: Error finding container 0d16d752196f4427bca0f48b922ffac81a5f58c9f76fb480331a0d9c4a63ea48: Status 404 returned error can't find the container with id 0d16d752196f4427bca0f48b922ffac81a5f58c9f76fb480331a0d9c4a63ea48 Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515904 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-client\") 
pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515953 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.515976 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19432356-d767-4580-9cec-6366011c203c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.516022 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ssx\" (UniqueName: \"kubernetes.io/projected/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-kube-api-access-g2ssx\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.516659 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 
11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.516786 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0623247c-d46a-4e16-8731-cdd6d2f4a16a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.517286 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.517335 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19432356-d767-4580-9cec-6366011c203c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.517462 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/273c4d4b-6972-435b-9fda-e802384dffd2-service-ca-bundle\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.517498 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjctg\" (UniqueName: 
\"kubernetes.io/projected/cf3562d8-1f85-460c-b49a-c2922d803c5a-kube-api-access-wjctg\") pod \"cluster-samples-operator-665b6dd947-kt5lf\" (UID: \"cf3562d8-1f85-460c-b49a-c2922d803c5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.517533 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-bound-sa-token\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.576889 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6gzzl"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.579514 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.586286 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622239 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622555 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622617 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622642 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxb7x\" (UniqueName: \"kubernetes.io/projected/19432356-d767-4580-9cec-6366011c203c-kube-api-access-fxb7x\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622701 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/cd9592cc-918c-4863-a561-61372a85c43f-audit-dir\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622735 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2952p\" (UniqueName: \"kubernetes.io/projected/3c3795a8-94c8-4eee-9791-f18e22d36c09-kube-api-access-2952p\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: E1128 11:55:19.622786 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.122741961 +0000 UTC m=+138.064484644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622853 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1425f4-1e94-443c-bb47-1a473f584069-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622906 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mqgh\" (UniqueName: \"kubernetes.io/projected/e8700055-6a97-470b-93de-aefe1758239b-kube-api-access-2mqgh\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622968 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-default-certificate\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.622991 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbg9\" (UniqueName: 
\"kubernetes.io/projected/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-kube-api-access-8rbg9\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623015 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-service-ca\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623037 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-csi-data-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623091 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623111 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f677af71-4fb9-41a5-99f4-59800a8de3b7-proxy-tls\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc 
kubenswrapper[5030]: I1128 11:55:19.623135 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1de8e8de-7aad-4d28-937b-d13eea43e672-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623154 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623171 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623201 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d1425f4-1e94-443c-bb47-1a473f584069-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623219 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4d1425f4-1e94-443c-bb47-1a473f584069-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623255 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623271 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62975b1e-d898-42d0-8f46-27c47287d53b-srv-cert\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623313 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-certificates\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623339 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-metrics-certs\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc 
kubenswrapper[5030]: I1128 11:55:19.623372 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-registration-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623410 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0623247c-d46a-4e16-8731-cdd6d2f4a16a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623428 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wj2\" (UniqueName: \"kubernetes.io/projected/f362eedc-734d-4cfd-831c-6dedca53f975-kube-api-access-97wj2\") pod \"multus-admission-controller-857f4d67dd-4f6gt\" (UID: \"f362eedc-734d-4cfd-831c-6dedca53f975\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623472 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczk6\" (UniqueName: \"kubernetes.io/projected/f79ac060-fa0b-487f-a959-90da3f7e1fa5-kube-api-access-lczk6\") pod \"ingress-canary-jhlzs\" (UID: \"f79ac060-fa0b-487f-a959-90da3f7e1fa5\") " pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623507 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-stats-auth\") pod \"router-default-5444994796-dz6n5\" (UID: 
\"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623528 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8sj\" (UniqueName: \"kubernetes.io/projected/273c4d4b-6972-435b-9fda-e802384dffd2-kube-api-access-dh8sj\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623549 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-serving-cert\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623570 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdbvg\" (UniqueName: \"kubernetes.io/projected/cd9592cc-918c-4863-a561-61372a85c43f-kube-api-access-qdbvg\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623592 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62975b1e-d898-42d0-8f46-27c47287d53b-profile-collector-cert\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623629 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623644 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a9a6f10-2c46-4625-8fc0-8522d9082086-tmpfs\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623683 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-mountpoint-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623705 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmr4b\" (UniqueName: \"kubernetes.io/projected/62975b1e-d898-42d0-8f46-27c47287d53b-kube-api-access-vmr4b\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623761 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/834d512d-9d01-48e1-a5a7-035d0e68cccd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 
11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623788 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c3795a8-94c8-4eee-9791-f18e22d36c09-serving-cert\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623820 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89d4a423-452d-4b92-927e-38eadd969e03-secret-volume\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623873 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e8700055-6a97-470b-93de-aefe1758239b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623919 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1de8e8de-7aad-4d28-937b-d13eea43e672-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623941 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3c3795a8-94c8-4eee-9791-f18e22d36c09-config\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623965 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-client\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.623985 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns7vs\" (UniqueName: \"kubernetes.io/projected/89d4a423-452d-4b92-927e-38eadd969e03-kube-api-access-ns7vs\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624018 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624036 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ssx\" (UniqueName: \"kubernetes.io/projected/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-kube-api-access-g2ssx\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc 
kubenswrapper[5030]: I1128 11:55:19.624088 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19432356-d767-4580-9cec-6366011c203c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624110 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f362eedc-734d-4cfd-831c-6dedca53f975-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4f6gt\" (UID: \"f362eedc-734d-4cfd-831c-6dedca53f975\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624133 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624164 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0623247c-d46a-4e16-8731-cdd6d2f4a16a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624187 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gv4\" (UniqueName: 
\"kubernetes.io/projected/814e1602-11f5-41ce-be92-9cefbb6dbe78-kube-api-access-l7gv4\") pod \"package-server-manager-789f6589d5-2lh2r\" (UID: \"814e1602-11f5-41ce-be92-9cefbb6dbe78\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624203 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f79ac060-fa0b-487f-a959-90da3f7e1fa5-cert\") pod \"ingress-canary-jhlzs\" (UID: \"f79ac060-fa0b-487f-a959-90da3f7e1fa5\") " pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624222 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624242 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19432356-d767-4580-9cec-6366011c203c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624261 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/273c4d4b-6972-435b-9fda-e802384dffd2-service-ca-bundle\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624280 
5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjctg\" (UniqueName: \"kubernetes.io/projected/cf3562d8-1f85-460c-b49a-c2922d803c5a-kube-api-access-wjctg\") pod \"cluster-samples-operator-665b6dd947-kt5lf\" (UID: \"cf3562d8-1f85-460c-b49a-c2922d803c5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624766 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76z9n\" (UniqueName: \"kubernetes.io/projected/833da764-f289-48e0-9321-57c4cab21e41-kube-api-access-76z9n\") pod \"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624807 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-socket-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.624833 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-plugins-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626529 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-bound-sa-token\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626565 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-tls\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626621 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-config\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626643 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9gw\" (UniqueName: \"kubernetes.io/projected/d0531246-fb61-45e1-943f-dbba72d91633-kube-api-access-8h9gw\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626691 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19432356-d767-4580-9cec-6366011c203c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626727 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626747 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe9419f1-075a-4031-8efa-f6b2302bece3-config-volume\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626766 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-trusted-ca\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626788 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z44t\" (UniqueName: \"kubernetes.io/projected/1c8d2d2e-13c2-4efe-9012-706047ea21e5-kube-api-access-7z44t\") pod \"migrator-59844c95c7-ttltj\" (UID: \"1c8d2d2e-13c2-4efe-9012-706047ea21e5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626842 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-ca\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626878 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626900 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f677af71-4fb9-41a5-99f4-59800a8de3b7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.626928 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rkt\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-kube-api-access-p8rkt\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.629235 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.629302 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/833da764-f289-48e0-9321-57c4cab21e41-signing-key\") pod \"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.629321 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd9592cc-918c-4863-a561-61372a85c43f-audit-dir\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.629353 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.634195 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0623247c-d46a-4e16-8731-cdd6d2f4a16a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635287 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-stats-auth\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635378 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fm6t\" (UniqueName: \"kubernetes.io/projected/6d1107b9-bf5a-45de-a54c-79c38ba041c6-kube-api-access-9fm6t\") pod 
\"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635421 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4kg2\" (UniqueName: \"kubernetes.io/projected/fe9419f1-075a-4031-8efa-f6b2302bece3-kube-api-access-z4kg2\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635494 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d1107b9-bf5a-45de-a54c-79c38ba041c6-node-bootstrap-token\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635521 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89d4a423-452d-4b92-927e-38eadd969e03-config-volume\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635543 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f677af71-4fb9-41a5-99f4-59800a8de3b7-images\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635572 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834d512d-9d01-48e1-a5a7-035d0e68cccd-config\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635595 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-audit-policies\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635621 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpkxl\" (UniqueName: \"kubernetes.io/projected/1de8e8de-7aad-4d28-937b-d13eea43e672-kube-api-access-dpkxl\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635662 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/814e1602-11f5-41ce-be92-9cefbb6dbe78-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2lh2r\" (UID: \"814e1602-11f5-41ce-be92-9cefbb6dbe78\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635773 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5kf\" (UniqueName: \"kubernetes.io/projected/f677af71-4fb9-41a5-99f4-59800a8de3b7-kube-api-access-6x5kf\") pod 
\"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635806 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834d512d-9d01-48e1-a5a7-035d0e68cccd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635816 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/273c4d4b-6972-435b-9fda-e802384dffd2-service-ca-bundle\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635827 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3562d8-1f85-460c-b49a-c2922d803c5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kt5lf\" (UID: \"cf3562d8-1f85-460c-b49a-c2922d803c5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635910 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d1107b9-bf5a-45de-a54c-79c38ba041c6-certs\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635966 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a9a6f10-2c46-4625-8fc0-8522d9082086-webhook-cert\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.635998 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5wk\" (UniqueName: \"kubernetes.io/projected/8a9a6f10-2c46-4625-8fc0-8522d9082086-kube-api-access-gq5wk\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.636043 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e8700055-6a97-470b-93de-aefe1758239b-srv-cert\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.636091 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a9a6f10-2c46-4625-8fc0-8522d9082086-apiservice-cert\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.636130 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe9419f1-075a-4031-8efa-f6b2302bece3-metrics-tls\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " 
pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.636342 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19432356-d767-4580-9cec-6366011c203c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.639163 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/833da764-f289-48e0-9321-57c4cab21e41-signing-cabundle\") pod \"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.640867 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-config\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.640908 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1de8e8de-7aad-4d28-937b-d13eea43e672-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.638222 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-service-ca\") pod 
\"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: E1128 11:55:19.641282 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.141261476 +0000 UTC m=+138.083004159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.641361 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0623247c-d46a-4e16-8731-cdd6d2f4a16a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.643490 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-client\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.644455 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-serving-cert\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.649117 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/834d512d-9d01-48e1-a5a7-035d0e68cccd-config\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.650220 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-certificates\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.650885 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1de8e8de-7aad-4d28-937b-d13eea43e672-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.652403 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-default-certificate\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.652706 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19432356-d767-4580-9cec-6366011c203c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.653331 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/273c4d4b-6972-435b-9fda-e802384dffd2-metrics-certs\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.654941 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-tls\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.655764 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-trusted-ca\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.656930 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-etcd-ca\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: 
I1128 11:55:19.659577 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxb7x\" (UniqueName: \"kubernetes.io/projected/19432356-d767-4580-9cec-6366011c203c-kube-api-access-fxb7x\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.664382 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf3562d8-1f85-460c-b49a-c2922d803c5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kt5lf\" (UID: \"cf3562d8-1f85-460c-b49a-c2922d803c5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.664813 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/834d512d-9d01-48e1-a5a7-035d0e68cccd-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.665301 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.674293 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.677426 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.677608 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.678677 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-audit-policies\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.681177 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.686547 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.686796 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/834d512d-9d01-48e1-a5a7-035d0e68cccd-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kwjk5\" (UID: \"834d512d-9d01-48e1-a5a7-035d0e68cccd\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.686914 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.698858 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.699325 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-456s8\" (UID: 
\"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.699961 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.705942 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-84zsn"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.706426 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjctg\" (UniqueName: \"kubernetes.io/projected/cf3562d8-1f85-460c-b49a-c2922d803c5a-kube-api-access-wjctg\") pod \"cluster-samples-operator-665b6dd947-kt5lf\" (UID: \"cf3562d8-1f85-460c-b49a-c2922d803c5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.708374 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.719963 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdbvg\" (UniqueName: \"kubernetes.io/projected/cd9592cc-918c-4863-a561-61372a85c43f-kube-api-access-qdbvg\") pod \"oauth-openshift-558db77b4-456s8\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.720906 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-r2vs9"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.735537 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.737720 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-bound-sa-token\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.741715 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.741922 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f677af71-4fb9-41a5-99f4-59800a8de3b7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.741969 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/833da764-f289-48e0-9321-57c4cab21e41-signing-key\") pod \"service-ca-9c57cc56f-9blt4\" (UID: 
\"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.741998 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fm6t\" (UniqueName: \"kubernetes.io/projected/6d1107b9-bf5a-45de-a54c-79c38ba041c6-kube-api-access-9fm6t\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742015 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4kg2\" (UniqueName: \"kubernetes.io/projected/fe9419f1-075a-4031-8efa-f6b2302bece3-kube-api-access-z4kg2\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742034 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d1107b9-bf5a-45de-a54c-79c38ba041c6-node-bootstrap-token\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742054 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89d4a423-452d-4b92-927e-38eadd969e03-config-volume\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742071 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f677af71-4fb9-41a5-99f4-59800a8de3b7-images\") pod 
\"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742095 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/814e1602-11f5-41ce-be92-9cefbb6dbe78-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2lh2r\" (UID: \"814e1602-11f5-41ce-be92-9cefbb6dbe78\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742113 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5kf\" (UniqueName: \"kubernetes.io/projected/f677af71-4fb9-41a5-99f4-59800a8de3b7-kube-api-access-6x5kf\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742135 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6d1107b9-bf5a-45de-a54c-79c38ba041c6-certs\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742155 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a9a6f10-2c46-4625-8fc0-8522d9082086-webhook-cert\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742174 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gq5wk\" (UniqueName: \"kubernetes.io/projected/8a9a6f10-2c46-4625-8fc0-8522d9082086-kube-api-access-gq5wk\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742192 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e8700055-6a97-470b-93de-aefe1758239b-srv-cert\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742211 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a9a6f10-2c46-4625-8fc0-8522d9082086-apiservice-cert\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742229 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fe9419f1-075a-4031-8efa-f6b2302bece3-metrics-tls\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742244 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/833da764-f289-48e0-9321-57c4cab21e41-signing-cabundle\") pod \"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742264 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2952p\" (UniqueName: \"kubernetes.io/projected/3c3795a8-94c8-4eee-9791-f18e22d36c09-kube-api-access-2952p\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742283 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1425f4-1e94-443c-bb47-1a473f584069-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742308 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mqgh\" (UniqueName: \"kubernetes.io/projected/e8700055-6a97-470b-93de-aefe1758239b-kube-api-access-2mqgh\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742365 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbg9\" (UniqueName: \"kubernetes.io/projected/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-kube-api-access-8rbg9\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742388 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-csi-data-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: 
\"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742414 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f677af71-4fb9-41a5-99f4-59800a8de3b7-proxy-tls\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742435 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742453 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d1425f4-1e94-443c-bb47-1a473f584069-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742470 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1425f4-1e94-443c-bb47-1a473f584069-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742505 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/62975b1e-d898-42d0-8f46-27c47287d53b-srv-cert\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742534 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-registration-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742554 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97wj2\" (UniqueName: \"kubernetes.io/projected/f362eedc-734d-4cfd-831c-6dedca53f975-kube-api-access-97wj2\") pod \"multus-admission-controller-857f4d67dd-4f6gt\" (UID: \"f362eedc-734d-4cfd-831c-6dedca53f975\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742577 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczk6\" (UniqueName: \"kubernetes.io/projected/f79ac060-fa0b-487f-a959-90da3f7e1fa5-kube-api-access-lczk6\") pod \"ingress-canary-jhlzs\" (UID: \"f79ac060-fa0b-487f-a959-90da3f7e1fa5\") " pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742596 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62975b1e-d898-42d0-8f46-27c47287d53b-profile-collector-cert\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 
11:55:19.742623 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742639 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a9a6f10-2c46-4625-8fc0-8522d9082086-tmpfs\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742657 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmr4b\" (UniqueName: \"kubernetes.io/projected/62975b1e-d898-42d0-8f46-27c47287d53b-kube-api-access-vmr4b\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742674 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-mountpoint-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742692 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c3795a8-94c8-4eee-9791-f18e22d36c09-serving-cert\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742708 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89d4a423-452d-4b92-927e-38eadd969e03-secret-volume\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742726 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e8700055-6a97-470b-93de-aefe1758239b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742745 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3795a8-94c8-4eee-9791-f18e22d36c09-config\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742770 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns7vs\" (UniqueName: \"kubernetes.io/projected/89d4a423-452d-4b92-927e-38eadd969e03-kube-api-access-ns7vs\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742800 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/f362eedc-734d-4cfd-831c-6dedca53f975-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4f6gt\" (UID: \"f362eedc-734d-4cfd-831c-6dedca53f975\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742819 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7gv4\" (UniqueName: \"kubernetes.io/projected/814e1602-11f5-41ce-be92-9cefbb6dbe78-kube-api-access-l7gv4\") pod \"package-server-manager-789f6589d5-2lh2r\" (UID: \"814e1602-11f5-41ce-be92-9cefbb6dbe78\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742835 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f79ac060-fa0b-487f-a959-90da3f7e1fa5-cert\") pod \"ingress-canary-jhlzs\" (UID: \"f79ac060-fa0b-487f-a959-90da3f7e1fa5\") " pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:19 crc kubenswrapper[5030]: E1128 11:55:19.742873 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.242843218 +0000 UTC m=+138.184586071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742935 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76z9n\" (UniqueName: \"kubernetes.io/projected/833da764-f289-48e0-9321-57c4cab21e41-kube-api-access-76z9n\") pod \"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.742976 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-socket-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.743010 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-plugins-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.743059 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9gw\" (UniqueName: \"kubernetes.io/projected/d0531246-fb61-45e1-943f-dbba72d91633-kube-api-access-8h9gw\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.743113 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe9419f1-075a-4031-8efa-f6b2302bece3-config-volume\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.743150 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z44t\" (UniqueName: \"kubernetes.io/projected/1c8d2d2e-13c2-4efe-9012-706047ea21e5-kube-api-access-7z44t\") pod \"migrator-59844c95c7-ttltj\" (UID: \"1c8d2d2e-13c2-4efe-9012-706047ea21e5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.743703 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-registration-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.746304 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d1425f4-1e94-443c-bb47-1a473f584069-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.749892 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-csi-data-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.751178 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-mountpoint-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.755610 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.757065 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f677af71-4fb9-41a5-99f4-59800a8de3b7-proxy-tls\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.757637 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d1425f4-1e94-443c-bb47-1a473f584069-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.758918 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/833da764-f289-48e0-9321-57c4cab21e41-signing-cabundle\") pod 
\"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.759429 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a9a6f10-2c46-4625-8fc0-8522d9082086-tmpfs\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.761217 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.761261 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bdbjw"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.763547 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-plugins-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.763731 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0531246-fb61-45e1-943f-dbba72d91633-socket-dir\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.763893 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f677af71-4fb9-41a5-99f4-59800a8de3b7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b479q\" (UID: 
\"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.764278 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89d4a423-452d-4b92-927e-38eadd969e03-config-volume\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.765210 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3795a8-94c8-4eee-9791-f18e22d36c09-config\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.765325 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f677af71-4fb9-41a5-99f4-59800a8de3b7-images\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.765429 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62975b1e-d898-42d0-8f46-27c47287d53b-profile-collector-cert\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.767488 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/833da764-f289-48e0-9321-57c4cab21e41-signing-key\") pod \"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.769697 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe9419f1-075a-4031-8efa-f6b2302bece3-config-volume\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.769852 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e8700055-6a97-470b-93de-aefe1758239b-srv-cert\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.772086 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a9a6f10-2c46-4625-8fc0-8522d9082086-apiservice-cert\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.773636 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62975b1e-d898-42d0-8f46-27c47287d53b-srv-cert\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.774384 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/fe9419f1-075a-4031-8efa-f6b2302bece3-metrics-tls\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.774879 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.775296 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.776923 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6d1107b9-bf5a-45de-a54c-79c38ba041c6-node-bootstrap-token\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.777975 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e8700055-6a97-470b-93de-aefe1758239b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.785071 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f79ac060-fa0b-487f-a959-90da3f7e1fa5-cert\") pod \"ingress-canary-jhlzs\" (UID: \"f79ac060-fa0b-487f-a959-90da3f7e1fa5\") " pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.792248 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/6d1107b9-bf5a-45de-a54c-79c38ba041c6-certs\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.792247 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.793123 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19432356-d767-4580-9cec-6366011c203c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-btkdm\" (UID: \"19432356-d767-4580-9cec-6366011c203c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.793204 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/814e1602-11f5-41ce-be92-9cefbb6dbe78-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2lh2r\" (UID: \"814e1602-11f5-41ce-be92-9cefbb6dbe78\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.793542 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89d4a423-452d-4b92-927e-38eadd969e03-secret-volume\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:19 crc 
kubenswrapper[5030]: I1128 11:55:19.793693 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f362eedc-734d-4cfd-831c-6dedca53f975-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4f6gt\" (UID: \"f362eedc-734d-4cfd-831c-6dedca53f975\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.793733 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c3795a8-94c8-4eee-9791-f18e22d36c09-serving-cert\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.794397 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a9a6f10-2c46-4625-8fc0-8522d9082086-webhook-cert\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.800056 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ssx\" (UniqueName: \"kubernetes.io/projected/75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45-kube-api-access-g2ssx\") pod \"etcd-operator-b45778765-7c24l\" (UID: \"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.807112 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.809297 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpkxl\" (UniqueName: 
\"kubernetes.io/projected/1de8e8de-7aad-4d28-937b-d13eea43e672-kube-api-access-dpkxl\") pod \"kube-storage-version-migrator-operator-b67b599dd-frdgb\" (UID: \"1de8e8de-7aad-4d28-937b-d13eea43e672\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.826047 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8sj\" (UniqueName: \"kubernetes.io/projected/273c4d4b-6972-435b-9fda-e802384dffd2-kube-api-access-dh8sj\") pod \"router-default-5444994796-dz6n5\" (UID: \"273c4d4b-6972-435b-9fda-e802384dffd2\") " pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.845948 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: E1128 11:55:19.848403 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.347116507 +0000 UTC m=+138.288859190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.852577 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68"] Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.861641 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rkt\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-kube-api-access-p8rkt\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.874234 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5wk\" (UniqueName: \"kubernetes.io/projected/8a9a6f10-2c46-4625-8fc0-8522d9082086-kube-api-access-gq5wk\") pod \"packageserver-d55dfcdfc-ndgk2\" (UID: \"8a9a6f10-2c46-4625-8fc0-8522d9082086\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.890529 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z44t\" (UniqueName: \"kubernetes.io/projected/1c8d2d2e-13c2-4efe-9012-706047ea21e5-kube-api-access-7z44t\") pod \"migrator-59844c95c7-ttltj\" (UID: \"1c8d2d2e-13c2-4efe-9012-706047ea21e5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.898894 5030 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.920432 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mqgh\" (UniqueName: \"kubernetes.io/projected/e8700055-6a97-470b-93de-aefe1758239b-kube-api-access-2mqgh\") pod \"olm-operator-6b444d44fb-dd8jd\" (UID: \"e8700055-6a97-470b-93de-aefe1758239b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.926150 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.930190 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rbg9\" (UniqueName: \"kubernetes.io/projected/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-kube-api-access-8rbg9\") pod \"marketplace-operator-79b997595-frtvx\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.948455 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:19 crc kubenswrapper[5030]: E1128 11:55:19.949101 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.449079901 +0000 UTC m=+138.390822584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.963256 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2952p\" (UniqueName: \"kubernetes.io/projected/3c3795a8-94c8-4eee-9791-f18e22d36c09-kube-api-access-2952p\") pod \"service-ca-operator-777779d784-xk2p8\" (UID: \"3c3795a8-94c8-4eee-9791-f18e22d36c09\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:19 crc kubenswrapper[5030]: I1128 11:55:19.973366 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d1425f4-1e94-443c-bb47-1a473f584069-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jjxhn\" (UID: \"4d1425f4-1e94-443c-bb47-1a473f584069\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.001948 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.006833 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97wj2\" (UniqueName: \"kubernetes.io/projected/f362eedc-734d-4cfd-831c-6dedca53f975-kube-api-access-97wj2\") pod \"multus-admission-controller-857f4d67dd-4f6gt\" (UID: \"f362eedc-734d-4cfd-831c-6dedca53f975\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:20 crc 
kubenswrapper[5030]: I1128 11:55:20.014857 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" Nov 28 11:55:20 crc kubenswrapper[5030]: W1128 11:55:20.027161 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeba0e47_9667_44da_ab70_50346b203fa6.slice/crio-b629715b03e28e764efbcfc5503413f1e08f7c556ae937f5531dc3aaa9811800 WatchSource:0}: Error finding container b629715b03e28e764efbcfc5503413f1e08f7c556ae937f5531dc3aaa9811800: Status 404 returned error can't find the container with id b629715b03e28e764efbcfc5503413f1e08f7c556ae937f5531dc3aaa9811800 Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.027577 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.030432 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczk6\" (UniqueName: \"kubernetes.io/projected/f79ac060-fa0b-487f-a959-90da3f7e1fa5-kube-api-access-lczk6\") pod \"ingress-canary-jhlzs\" (UID: \"f79ac060-fa0b-487f-a959-90da3f7e1fa5\") " pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.032515 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmr4b\" (UniqueName: \"kubernetes.io/projected/62975b1e-d898-42d0-8f46-27c47287d53b-kube-api-access-vmr4b\") pod \"catalog-operator-68c6474976-wkwgz\" (UID: \"62975b1e-d898-42d0-8f46-27c47287d53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:20 crc kubenswrapper[5030]: W1128 11:55:20.036297 5030 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3073870f_c73c_4fcd_8dbf_e8c210aaa197.slice/crio-9d5a8e3041f540566438cf994afff514d8781ce41ecc53fcbb84ef400e134035 WatchSource:0}: Error finding container 9d5a8e3041f540566438cf994afff514d8781ce41ecc53fcbb84ef400e134035: Status 404 returned error can't find the container with id 9d5a8e3041f540566438cf994afff514d8781ce41ecc53fcbb84ef400e134035 Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.048168 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fm6t\" (UniqueName: \"kubernetes.io/projected/6d1107b9-bf5a-45de-a54c-79c38ba041c6-kube-api-access-9fm6t\") pod \"machine-config-server-kl7gk\" (UID: \"6d1107b9-bf5a-45de-a54c-79c38ba041c6\") " pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.050221 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.050931 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.55091626 +0000 UTC m=+138.492658943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.053028 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.068871 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76z9n\" (UniqueName: \"kubernetes.io/projected/833da764-f289-48e0-9321-57c4cab21e41-kube-api-access-76z9n\") pod \"service-ca-9c57cc56f-9blt4\" (UID: \"833da764-f289-48e0-9321-57c4cab21e41\") " pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.074898 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.097146 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4kg2\" (UniqueName: \"kubernetes.io/projected/fe9419f1-075a-4031-8efa-f6b2302bece3-kube-api-access-z4kg2\") pod \"dns-default-g77wg\" (UID: \"fe9419f1-075a-4031-8efa-f6b2302bece3\") " pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:20 crc kubenswrapper[5030]: W1128 11:55:20.097820 5030 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca2907d2_9fad_41b4_b625_19e05e2884c5.slice/crio-235f8aa4f87e4ee5bab6bc47857c9a7ef8086059e640589ccf7b0f19378b41b6 WatchSource:0}: Error finding container 235f8aa4f87e4ee5bab6bc47857c9a7ef8086059e640589ccf7b0f19378b41b6: Status 404 returned error can't find the container with id 235f8aa4f87e4ee5bab6bc47857c9a7ef8086059e640589ccf7b0f19378b41b6 Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.103630 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.109599 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.123582 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9gw\" (UniqueName: \"kubernetes.io/projected/d0531246-fb61-45e1-943f-dbba72d91633-kube-api-access-8h9gw\") pod \"csi-hostpathplugin-mvvnj\" (UID: \"d0531246-fb61-45e1-943f-dbba72d91633\") " pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.123814 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.125523 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4r4f7"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.126227 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.130990 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.138254 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.144388 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.148989 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5kf\" (UniqueName: \"kubernetes.io/projected/f677af71-4fb9-41a5-99f4-59800a8de3b7-kube-api-access-6x5kf\") pod \"machine-config-operator-74547568cd-b479q\" (UID: \"f677af71-4fb9-41a5-99f4-59800a8de3b7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.152845 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.153015 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.652988487 +0000 UTC m=+138.594731170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.155976 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.655954879 +0000 UTC m=+138.597697562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.157141 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.161582 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7gv4\" (UniqueName: \"kubernetes.io/projected/814e1602-11f5-41ce-be92-9cefbb6dbe78-kube-api-access-l7gv4\") pod \"package-server-manager-789f6589d5-2lh2r\" (UID: \"814e1602-11f5-41ce-be92-9cefbb6dbe78\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.161686 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.163125 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns7vs\" (UniqueName: \"kubernetes.io/projected/89d4a423-452d-4b92-927e-38eadd969e03-kube-api-access-ns7vs\") pod \"collect-profiles-29405505-2mvmw\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.164637 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.176113 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.182942 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.183220 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.189992 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.199195 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jhlzs" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.231874 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.237065 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kl7gk" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.254271 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l6ggh"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.260889 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" event={"ID":"5733d243-c607-42f6-b76a-a4852d2771ff","Type":"ContainerStarted","Data":"6db527bcae2a4971eeccdecd8ff4dc49057d5c8b80e71181f474d1932738efe2"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.261752 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" event={"ID":"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d","Type":"ContainerStarted","Data":"8a101627a8a01ce4734639716d27917246c77f37c3d6ee1ea767f4595b205a8d"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.263230 5030 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.263598 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" event={"ID":"0e7af0be-101e-4d83-92ab-c88b3cf47a55","Type":"ContainerStarted","Data":"75fcefbc3c37086db72874bb83e907b064a6d29a90d8a2bbd21ae2ebe34f1dd2"} Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.263795 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.763766396 +0000 UTC m=+138.705509079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.263946 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.264411 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.764400294 +0000 UTC m=+138.706142977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.266762 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" event={"ID":"49559462-c755-4be6-8277-c8cc20aeb0e0","Type":"ContainerStarted","Data":"df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.266790 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" event={"ID":"49559462-c755-4be6-8277-c8cc20aeb0e0","Type":"ContainerStarted","Data":"0d16d752196f4427bca0f48b922ffac81a5f58c9f76fb480331a0d9c4a63ea48"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.267817 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.269144 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" event={"ID":"ca2907d2-9fad-41b4-b625-19e05e2884c5","Type":"ContainerStarted","Data":"235f8aa4f87e4ee5bab6bc47857c9a7ef8086059e640589ccf7b0f19378b41b6"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.271086 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" 
event={"ID":"d077d777-7c83-42d3-9c90-b9155040a1ea","Type":"ContainerStarted","Data":"f15b002001dc85b619d498a4a6bb9be90a584c933c1b68675ba206ad56c14669"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.272568 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" event={"ID":"3073870f-c73c-4fcd-8dbf-e8c210aaa197","Type":"ContainerStarted","Data":"9d5a8e3041f540566438cf994afff514d8781ce41ecc53fcbb84ef400e134035"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.285214 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" event={"ID":"a5c19601-52c5-40bd-8640-3fd0128e7b6a","Type":"ContainerStarted","Data":"8290445568c43711aea7f6d18dd0716afd7f714a5a26519278b117cf46164688"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.292603 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" event={"ID":"4310c2c4-4ad9-4820-abc9-09f761fa3a71","Type":"ContainerStarted","Data":"3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.292648 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.292663 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" event={"ID":"4310c2c4-4ad9-4820-abc9-09f761fa3a71","Type":"ContainerStarted","Data":"1f4a7abeb5be2a94b3e6ef13dddd7bebc9225aeb2256eb4da034d6f48dd4502a"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.294092 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:55:20 crc 
kubenswrapper[5030]: I1128 11:55:20.296947 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" event={"ID":"19e32b00-1659-4841-b343-d23e28700081","Type":"ContainerStarted","Data":"22b70fe141d578c862d9bc866b06fbb43e94934eb8f7e0a30a676b9db725af83"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.308639 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.311201 5030 generic.go:334] "Generic (PLEG): container finished" podID="7014aabc-8352-44c9-964a-30fdbbcb47d9" containerID="e79bf30a1a362e16524b91f5412543c06018d69e0360cedd80d23e3357d8bb5a" exitCode=0 Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.311368 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" event={"ID":"7014aabc-8352-44c9-964a-30fdbbcb47d9","Type":"ContainerDied","Data":"e79bf30a1a362e16524b91f5412543c06018d69e0360cedd80d23e3357d8bb5a"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.311444 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" event={"ID":"7014aabc-8352-44c9-964a-30fdbbcb47d9","Type":"ContainerStarted","Data":"e11c24a9463b149b6bcff9afa2615cdd92beaaee4ee4bb3d1d40651e08c6592a"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.313037 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" event={"ID":"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913","Type":"ContainerStarted","Data":"2319b9562c8333f9d9a1f4aa0f632f081f464431a6faa1922d55b4b2690a5c11"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.320094 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" event={"ID":"feba0e47-9667-44da-ab70-50346b203fa6","Type":"ContainerStarted","Data":"b629715b03e28e764efbcfc5503413f1e08f7c556ae937f5531dc3aaa9811800"} Nov 28 11:55:20 crc kubenswrapper[5030]: W1128 11:55:20.323602 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d2a9f2a_efa6_4d3a_b9ec_2d4b40376fc7.slice/crio-6cba65b5c78277951cdb3e138c28a3d33b75316460573e24f670dc4ace33a72a WatchSource:0}: Error finding container 6cba65b5c78277951cdb3e138c28a3d33b75316460573e24f670dc4ace33a72a: Status 404 returned error can't find the container with id 6cba65b5c78277951cdb3e138c28a3d33b75316460573e24f670dc4ace33a72a Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.332091 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" event={"ID":"95d6b274-7def-4790-b0ab-bae4d0f8d6db","Type":"ContainerStarted","Data":"ba60b5f77967f8af7e56c717e457e9c1d306e857c27731a8ce6a1353f1f3d6bd"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.335340 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" event={"ID":"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5","Type":"ContainerStarted","Data":"382fa1d6d019edd78ce6c1b7ede70cc95949756a0762f5acedf15dc080c4d5ab"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.335395 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" event={"ID":"8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5","Type":"ContainerStarted","Data":"d9a63f4957a291d425afaa84d3d6d75eff782f215019c2346420350253fcd105"} Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.336477 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:20 crc kubenswrapper[5030]: 
I1128 11:55:20.346383 5030 patch_prober.go:28] interesting pod/console-operator-58897d9998-6gzzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.346436 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" podUID="8c1f0bb2-d0bd-4eb6-a4bf-82947f662db5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.365110 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.368181 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.868159247 +0000 UTC m=+138.809901930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.387885 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.417096 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-456s8"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.468811 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.473628 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:20.973607468 +0000 UTC m=+138.915350351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.480195 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" podStartSLOduration=119.48013761 podStartE2EDuration="1m59.48013761s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:20.466719876 +0000 UTC m=+138.408462559" watchObservedRunningTime="2025-11-28 11:55:20.48013761 +0000 UTC m=+138.421880293" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.531082 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.553018 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" podStartSLOduration=119.552992854 podStartE2EDuration="1m59.552992854s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:20.552107399 +0000 UTC m=+138.493850082" watchObservedRunningTime="2025-11-28 11:55:20.552992854 +0000 UTC m=+138.494735537" Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.557334 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.583322 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.583971 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.083927394 +0000 UTC m=+139.025670077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: W1128 11:55:20.678112 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod273c4d4b_6972_435b_9fda_e802384dffd2.slice/crio-d6aa3612c33dc23879e4d6fa99e3bfe28220deed5b7f0570c08dc14f73a8238b WatchSource:0}: Error finding container d6aa3612c33dc23879e4d6fa99e3bfe28220deed5b7f0570c08dc14f73a8238b: Status 404 returned error can't find the container with id d6aa3612c33dc23879e4d6fa99e3bfe28220deed5b7f0570c08dc14f73a8238b Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.686274 5030 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.686594 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.186582797 +0000 UTC m=+139.128325480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.726309 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd"] Nov 28 11:55:20 crc kubenswrapper[5030]: W1128 11:55:20.744340 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod834d512d_9d01_48e1_a5a7_035d0e68cccd.slice/crio-d9f7f44bf541bdee2cfce31b21bb9dce1054a6f814e6a833165f37cbd4af49d4 WatchSource:0}: Error finding container d9f7f44bf541bdee2cfce31b21bb9dce1054a6f814e6a833165f37cbd4af49d4: Status 404 returned error can't find the container with id d9f7f44bf541bdee2cfce31b21bb9dce1054a6f814e6a833165f37cbd4af49d4 Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.789213 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.792130 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.292065249 +0000 UTC m=+139.233807932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.805206 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.899168 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:20 crc kubenswrapper[5030]: E1128 11:55:20.899536 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.399524596 +0000 UTC m=+139.341267279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.921306 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4f6gt"] Nov 28 11:55:20 crc kubenswrapper[5030]: I1128 11:55:20.921359 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7c24l"] Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.000358 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.000513 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.500495202 +0000 UTC m=+139.442237885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.000645 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.000925 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.500918543 +0000 UTC m=+139.442661226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.103842 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.104538 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.604520643 +0000 UTC m=+139.546263326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.142614 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5xdv" podStartSLOduration=121.142592321 podStartE2EDuration="2m1.142592321s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:21.115328473 +0000 UTC m=+139.057071146" watchObservedRunningTime="2025-11-28 11:55:21.142592321 +0000 UTC m=+139.084335014" Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.202610 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw"] Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.205929 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.206366 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 11:55:21.706349063 +0000 UTC m=+139.648091746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.310525 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.310894 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.810857867 +0000 UTC m=+139.752600550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.365733 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2"] Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.385097 5030 generic.go:334] "Generic (PLEG): container finished" podID="41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d" containerID="c89bddca60280be1670fc31304bff1834fdb5816aadc35991c4e169917565722" exitCode=0 Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.385174 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" event={"ID":"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d","Type":"ContainerDied","Data":"c89bddca60280be1670fc31304bff1834fdb5816aadc35991c4e169917565722"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.403983 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4r4f7" event={"ID":"5e0cbf40-e788-44c2-9eba-ddd17d412551","Type":"ContainerStarted","Data":"2d9c4ef64e1b52a72a20a16cfd171447f4b3e25d7921435b6120655feec2f145"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.414946 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 
11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.415320 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:21.91530988 +0000 UTC m=+139.857052563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.415413 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" event={"ID":"cd9592cc-918c-4863-a561-61372a85c43f","Type":"ContainerStarted","Data":"592acbe817814a2c914b5c5f3a312b283d728edc05e9ef312d63e7e53ba2d0b0"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.426339 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" event={"ID":"834d512d-9d01-48e1-a5a7-035d0e68cccd","Type":"ContainerStarted","Data":"d9f7f44bf541bdee2cfce31b21bb9dce1054a6f814e6a833165f37cbd4af49d4"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.432491 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" event={"ID":"19432356-d767-4580-9cec-6366011c203c","Type":"ContainerStarted","Data":"c682752cc876f6e35733da16333a0d59eca55d90050cdf820626e08a5c257112"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.464899 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" event={"ID":"f362eedc-734d-4cfd-831c-6dedca53f975","Type":"ContainerStarted","Data":"b2eb3764af2d35da4b41efa7fd9a4e32ab4a67e71486ff1353a860bd6c559deb"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.475065 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" event={"ID":"e8700055-6a97-470b-93de-aefe1758239b","Type":"ContainerStarted","Data":"b7c301525446edf20870c112f753194ecf1a4f6de079fb38d29819e129e2df36"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.478117 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" event={"ID":"feba0e47-9667-44da-ab70-50346b203fa6","Type":"ContainerStarted","Data":"f85468f2153282823703c73ed282d44dd72640936ce759b539c16bcf05894dc8"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.478871 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" event={"ID":"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45","Type":"ContainerStarted","Data":"2435322d7ce4adce38199a358b5aecc85e6b790b646b10acd33aef11419f04d2"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.486496 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" event={"ID":"f5575e8a-bac5-451e-9419-db009e281ea5","Type":"ContainerStarted","Data":"363c81d7845eb3ee22a3d717d3a11319eea0590e4919106f97e6b3aac55c0ce6"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.492525 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dz6n5" event={"ID":"273c4d4b-6972-435b-9fda-e802384dffd2","Type":"ContainerStarted","Data":"d6aa3612c33dc23879e4d6fa99e3bfe28220deed5b7f0570c08dc14f73a8238b"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.498644 5030 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l6ggh" event={"ID":"1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7","Type":"ContainerStarted","Data":"6cba65b5c78277951cdb3e138c28a3d33b75316460573e24f670dc4ace33a72a"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.500221 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" event={"ID":"cf3562d8-1f85-460c-b49a-c2922d803c5a","Type":"ContainerStarted","Data":"8d516eaf06bffe2bee441686041923ce9e20de8a6b23b8faf726c6860d188ac7"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.503760 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" event={"ID":"d077d777-7c83-42d3-9c90-b9155040a1ea","Type":"ContainerStarted","Data":"71f75583761b3fdde3e4ab9a9ad25b9212edc88ea12ff195fdc974489ca8cb73"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.506137 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" event={"ID":"0e7af0be-101e-4d83-92ab-c88b3cf47a55","Type":"ContainerStarted","Data":"d310e3b7ce70fcbda52342ea01a8bc12b5e2338c5eb4cf79d54a2aa9bb7499d3"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.509778 5030 generic.go:334] "Generic (PLEG): container finished" podID="95d6b274-7def-4790-b0ab-bae4d0f8d6db" containerID="60376dc91203fe03f6369d6fb72ed37a2fe8ed565e7eb7b6c801f8557375b1d1" exitCode=0 Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.510527 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" event={"ID":"95d6b274-7def-4790-b0ab-bae4d0f8d6db","Type":"ContainerDied","Data":"60376dc91203fe03f6369d6fb72ed37a2fe8ed565e7eb7b6c801f8557375b1d1"} Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.517561 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.517718 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.017689446 +0000 UTC m=+139.959432129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.517842 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.520080 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.020061482 +0000 UTC m=+139.961804165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.537415 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6gzzl" Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.633883 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.634382 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.134348927 +0000 UTC m=+140.076091610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.639350 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.652204 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.152153073 +0000 UTC m=+140.093895756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.742918 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.743107 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.24308389 +0000 UTC m=+140.184826573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.743257 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.743596 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.243588763 +0000 UTC m=+140.185331446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.758904 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" podStartSLOduration=119.758881819 podStartE2EDuration="1m59.758881819s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:21.756824862 +0000 UTC m=+139.698567555" watchObservedRunningTime="2025-11-28 11:55:21.758881819 +0000 UTC m=+139.700624502" Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.845085 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.845646 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.345621599 +0000 UTC m=+140.287364272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:21 crc kubenswrapper[5030]: I1128 11:55:21.953640 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:21 crc kubenswrapper[5030]: E1128 11:55:21.954068 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.454049243 +0000 UTC m=+140.395791926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.056401 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.056881 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.55685197 +0000 UTC m=+140.498594653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.057379 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.057818 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.557799147 +0000 UTC m=+140.499541830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.118717 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-47vf7" podStartSLOduration=121.118684288 podStartE2EDuration="2m1.118684288s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:21.910407341 +0000 UTC m=+139.852150024" watchObservedRunningTime="2025-11-28 11:55:22.118684288 +0000 UTC m=+140.060426971" Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.119848 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jhlzs"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.122786 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-frtvx"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.130444 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.130809 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-g77wg"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.159657 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9blt4"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 
11:55:22.161016 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.161357 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.661325334 +0000 UTC m=+140.603068017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.161460 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.161890 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 11:55:22.661874469 +0000 UTC m=+140.603617142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.262278 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.263196 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.763172934 +0000 UTC m=+140.704915617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.272063 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.273803 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mvvnj"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.275503 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b479q"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.357909 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.368531 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.369045 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 11:55:22.869025596 +0000 UTC m=+140.810768279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.376381 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.397091 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.471517 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.472827 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:22.97278473 +0000 UTC m=+140.914527413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.576318 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.577219 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.077200932 +0000 UTC m=+141.018943605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: W1128 11:55:22.590101 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c3795a8_94c8_4eee_9791_f18e22d36c09.slice/crio-f68983a64f2ae86edce780621ff958f1b7a9456b70c6de423219f1eea78c1a1b WatchSource:0}: Error finding container f68983a64f2ae86edce780621ff958f1b7a9456b70c6de423219f1eea78c1a1b: Status 404 returned error can't find the container with id f68983a64f2ae86edce780621ff958f1b7a9456b70c6de423219f1eea78c1a1b Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.700944 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" event={"ID":"833da764-f289-48e0-9321-57c4cab21e41","Type":"ContainerStarted","Data":"2fa38664b230dcf045f7268b049a5b54a4de91ae1201e76541a08bc764be9ce8"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.700986 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" event={"ID":"8a9a6f10-2c46-4625-8fc0-8522d9082086","Type":"ContainerStarted","Data":"d4d2f113b487e541a2fadbf9d8ed09899de4daa1a392f2587ffd2462f823ee18"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.700999 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb"] Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.701013 5030 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" event={"ID":"e8700055-6a97-470b-93de-aefe1758239b","Type":"ContainerStarted","Data":"d5504ec01f45be6b3fdd1b33b3271e121cd687767b76ddf38ba66ba2d1bcf561"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.701023 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" event={"ID":"1c8d2d2e-13c2-4efe-9012-706047ea21e5","Type":"ContainerStarted","Data":"cb73fd8a2597f21ad1a0a189b57cf9c3137087a346c7e9e471ded5f507cf1c9a"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.701216 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" event={"ID":"3073870f-c73c-4fcd-8dbf-e8c210aaa197","Type":"ContainerStarted","Data":"cccf605e3c62794406c1dc7a46b8f821e84fbe6cc205afe14d688fc9dc694f58"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.702540 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.702982 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.202952457 +0000 UTC m=+141.144695140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.753887 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" event={"ID":"a5c19601-52c5-40bd-8640-3fd0128e7b6a","Type":"ContainerStarted","Data":"b1e0ccba31430730b6fd59f6b5a3a0393eb3c4d12d65d3d8fd04e328139a8377"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.768950 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" event={"ID":"7014aabc-8352-44c9-964a-30fdbbcb47d9","Type":"ContainerStarted","Data":"51f7f11b4d1f1e4e7d8a1fc406ac94820ad3540f148357178636ae2926c357bb"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.773728 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" event={"ID":"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf","Type":"ContainerStarted","Data":"49e117dceac2c0d3942bee81348ff6d53b526619adba0ac0d4b5c8347b651718"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.783347 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kl7gk" event={"ID":"6d1107b9-bf5a-45de-a54c-79c38ba041c6","Type":"ContainerStarted","Data":"664f75f21b79ada99ec74dc8076d37b3c5c2d37acb3f96b645a4c569d552c427"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.813448 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.813782 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.313772517 +0000 UTC m=+141.255515200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.826856 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l6ggh" event={"ID":"1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7","Type":"ContainerStarted","Data":"deb6213578d1c40def1e35fe1c7a12952db7c8e6a9a3dc6e3e3d93932fbf3881"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.832693 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l6ggh" Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.854017 5030 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6ggh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.854082 5030 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6ggh" podUID="1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.854392 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g77wg" event={"ID":"fe9419f1-075a-4031-8efa-f6b2302bece3","Type":"ContainerStarted","Data":"7abc4e13efba08b4668ff1a55c23414dd3bfd2e136820e1c584c58e5dd408196"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.860445 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" event={"ID":"cd9592cc-918c-4863-a561-61372a85c43f","Type":"ContainerStarted","Data":"33e13650c65a78fbd483f0bafccc3430deceabc79da53c0ded25ef6126e1ee79"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.861384 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.865676 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" event={"ID":"f677af71-4fb9-41a5-99f4-59800a8de3b7","Type":"ContainerStarted","Data":"75b07229082308917afdaf48cb3b663090a78d01599275d30189e8697617b2ab"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.880033 5030 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-456s8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body= Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.880666 5030 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-456s8" podUID="cd9592cc-918c-4863-a561-61372a85c43f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.881917 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" event={"ID":"89d4a423-452d-4b92-927e-38eadd969e03","Type":"ContainerStarted","Data":"d979ae4e7b2a17c2f2fd0e9b14d35c298d9dd031f79ee2462a6dcbaba58b2ed0"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.916722 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:22 crc kubenswrapper[5030]: E1128 11:55:22.918241 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.418225729 +0000 UTC m=+141.359968412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.923496 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" event={"ID":"f5575e8a-bac5-451e-9419-db009e281ea5","Type":"ContainerStarted","Data":"766fe279449b29969c5e9500b965fe062b0901031a59ae497406d36171a0f877"} Nov 28 11:55:22 crc kubenswrapper[5030]: I1128 11:55:22.953542 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" event={"ID":"ca2907d2-9fad-41b4-b625-19e05e2884c5","Type":"ContainerStarted","Data":"8b9367c1fd1632973cee58ecd48287cc709395e334f5bffb4e8bd28661b8c4c4"} Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.008488 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4r4f7" event={"ID":"5e0cbf40-e788-44c2-9eba-ddd17d412551","Type":"ContainerStarted","Data":"ff73029b677ef1bd9ab92ec4321c0fd6070ad75f13915840eccab00e5d912273"} Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.011213 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jhlzs" event={"ID":"f79ac060-fa0b-487f-a959-90da3f7e1fa5","Type":"ContainerStarted","Data":"507eabf6c28ff21311c399e258a58b49b567feed7e655a6394fbdc1cc1718e4f"} Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.019157 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.021066 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.521047647 +0000 UTC m=+141.462790330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.028558 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" event={"ID":"5733d243-c607-42f6-b76a-a4852d2771ff","Type":"ContainerStarted","Data":"a21afae63fbaa7697013d014d42422c28381d68049b38efcb01237c0d25153d3"} Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.122161 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.122487 5030 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.622440145 +0000 UTC m=+141.564182828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.122591 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.124524 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.624503443 +0000 UTC m=+141.566246126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.212241 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" podStartSLOduration=122.21221902 podStartE2EDuration="2m2.21221902s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:23.19781406 +0000 UTC m=+141.139556743" watchObservedRunningTime="2025-11-28 11:55:23.21221902 +0000 UTC m=+141.153961703" Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.229840 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.232130 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.732108493 +0000 UTC m=+141.673851166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.256241 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4r4f7" podStartSLOduration=122.256210203 podStartE2EDuration="2m2.256210203s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:23.092617686 +0000 UTC m=+141.034360369" watchObservedRunningTime="2025-11-28 11:55:23.256210203 +0000 UTC m=+141.197952886" Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.278625 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" podStartSLOduration=123.278605976 podStartE2EDuration="2m3.278605976s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:23.236186946 +0000 UTC m=+141.177929629" watchObservedRunningTime="2025-11-28 11:55:23.278605976 +0000 UTC m=+141.220348659" Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.336547 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: 
\"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.336827 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.836815453 +0000 UTC m=+141.778558136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.346270 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l6ggh" podStartSLOduration=122.346254945 podStartE2EDuration="2m2.346254945s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:23.300794532 +0000 UTC m=+141.242537215" watchObservedRunningTime="2025-11-28 11:55:23.346254945 +0000 UTC m=+141.287997628" Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.393242 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pvtql" podStartSLOduration=122.393223231 podStartE2EDuration="2m2.393223231s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 
11:55:23.34786104 +0000 UTC m=+141.289603723" watchObservedRunningTime="2025-11-28 11:55:23.393223231 +0000 UTC m=+141.334965914" Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.437245 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.437568 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:23.937549813 +0000 UTC m=+141.879292496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.491572 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" podStartSLOduration=122.491529043 podStartE2EDuration="2m2.491529043s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:23.478369827 +0000 UTC m=+141.420112510" watchObservedRunningTime="2025-11-28 11:55:23.491529043 +0000 UTC m=+141.433271736" 
Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.491933 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-r2vs9" podStartSLOduration=122.491926254 podStartE2EDuration="2m2.491926254s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:23.402211821 +0000 UTC m=+141.343954504" watchObservedRunningTime="2025-11-28 11:55:23.491926254 +0000 UTC m=+141.433668937" Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.558554 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.559094 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.059071 +0000 UTC m=+142.000813683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.663322 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.663574 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.163533294 +0000 UTC m=+142.105275977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.663954 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.664418 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.164393267 +0000 UTC m=+142.106135950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.768038 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.768318 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.268278094 +0000 UTC m=+142.210020777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.873202 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.876436 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.37640175 +0000 UTC m=+142.318144483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:23 crc kubenswrapper[5030]: I1128 11:55:23.976455 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:23 crc kubenswrapper[5030]: E1128 11:55:23.976928 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.476906372 +0000 UTC m=+142.418649055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.077970 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.078718 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.578701102 +0000 UTC m=+142.520443785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.127937 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" event={"ID":"f362eedc-734d-4cfd-831c-6dedca53f975","Type":"ContainerStarted","Data":"9450c4e6a860df67cae61d90217227a5e4b82ae95cd4128f0ed7b799152b823a"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.158337 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" event={"ID":"1de8e8de-7aad-4d28-937b-d13eea43e672","Type":"ContainerStarted","Data":"9c383a19d586d376615babcf8febb56f02ac5c12769e964a7cf86c8288be5ced"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.179185 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.180267 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" event={"ID":"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf","Type":"ContainerStarted","Data":"c6e441200129c812d062ae3d3eaede9d5ab531c39453c4d0a60ca97addcb2d9b"} Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.180568 5030 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.680542352 +0000 UTC m=+142.622285035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.181577 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.195766 5030 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-frtvx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.195814 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" podUID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.240555 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" podStartSLOduration=123.24053396 podStartE2EDuration="2m3.24053396s" 
podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.238266026 +0000 UTC m=+142.180008709" watchObservedRunningTime="2025-11-28 11:55:24.24053396 +0000 UTC m=+142.182276643" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.260902 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" event={"ID":"19432356-d767-4580-9cec-6366011c203c","Type":"ContainerStarted","Data":"efa9d23d37f4003b03b252849de98e78bafc1e984e8f39ddd67e900cda5c43cd"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.283970 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.284492 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.78444858 +0000 UTC m=+142.726191263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.298952 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" event={"ID":"834d512d-9d01-48e1-a5a7-035d0e68cccd","Type":"ContainerStarted","Data":"bf7b087a980e88312916b5649c7013849f662baa2d40e49e720fbe2c54d474f7"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.345673 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-btkdm" podStartSLOduration=123.345655381 podStartE2EDuration="2m3.345655381s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.322982491 +0000 UTC m=+142.264725174" watchObservedRunningTime="2025-11-28 11:55:24.345655381 +0000 UTC m=+142.287398064" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.387405 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.391271 5030 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:24.891237138 +0000 UTC m=+142.832979821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.393278 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" event={"ID":"814e1602-11f5-41ce-be92-9cefbb6dbe78","Type":"ContainerStarted","Data":"1575589f3d7c8be5460ebdce0013e458f263b58a992d6ac11bef10fc24ca7762"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.426007 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kwjk5" podStartSLOduration=123.425973603 podStartE2EDuration="2m3.425973603s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.378030341 +0000 UTC m=+142.319773014" watchObservedRunningTime="2025-11-28 11:55:24.425973603 +0000 UTC m=+142.367716286" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.487644 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" 
event={"ID":"8a9a6f10-2c46-4625-8fc0-8522d9082086","Type":"ContainerStarted","Data":"54a547395d9d0804d4ec57b0ceece038413dd7cd81545944c9700e90663dd128"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.487707 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" event={"ID":"95d6b274-7def-4790-b0ab-bae4d0f8d6db","Type":"ContainerStarted","Data":"ea4973274b33816f32672c746945930eaa09b53b7752c5a99c3607377460cdc1"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.487733 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.512314 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.512653 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.012640972 +0000 UTC m=+142.954383655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.515161 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dz6n5" event={"ID":"273c4d4b-6972-435b-9fda-e802384dffd2","Type":"ContainerStarted","Data":"a000fcc6af5803842848e62112e35b0129dab4f35af7287ef61fe4e1aa6ec44c"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.533066 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.536119 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ndgk2" podStartSLOduration=122.536091974 podStartE2EDuration="2m2.536091974s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.514903805 +0000 UTC m=+142.456646508" watchObservedRunningTime="2025-11-28 11:55:24.536091974 +0000 UTC m=+142.477834647" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.536214 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kl7gk" event={"ID":"6d1107b9-bf5a-45de-a54c-79c38ba041c6","Type":"ContainerStarted","Data":"5c0b2763fa35787eda90723b8ec207b81a83a2b9c2b025cf94b51cf3a83f6613"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.556751 5030 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dz6n5" podStartSLOduration=123.556733248 podStartE2EDuration="2m3.556733248s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.556075899 +0000 UTC m=+142.497818592" watchObservedRunningTime="2025-11-28 11:55:24.556733248 +0000 UTC m=+142.498475931" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.557418 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" event={"ID":"3c3795a8-94c8-4eee-9791-f18e22d36c09","Type":"ContainerStarted","Data":"9eff26a906969909fa3c8380a30c7b9bd9f6b5b09f6adfa14058f30e6b24cc82"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.557450 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" event={"ID":"3c3795a8-94c8-4eee-9791-f18e22d36c09","Type":"ContainerStarted","Data":"f68983a64f2ae86edce780621ff958f1b7a9456b70c6de423219f1eea78c1a1b"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.597850 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" event={"ID":"feba0e47-9667-44da-ab70-50346b203fa6","Type":"ContainerStarted","Data":"886eeeded5942915ef2b0f9bf0ac559d459a724db513ba5ff3f98ed31cf86469"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.617941 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.619252 
5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.119232504 +0000 UTC m=+143.060975197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.623926 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kl7gk" podStartSLOduration=7.623906455 podStartE2EDuration="7.623906455s" podCreationTimestamp="2025-11-28 11:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.623230905 +0000 UTC m=+142.564973588" watchObservedRunningTime="2025-11-28 11:55:24.623906455 +0000 UTC m=+142.565649128" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.689882 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" event={"ID":"0e7af0be-101e-4d83-92ab-c88b3cf47a55","Type":"ContainerStarted","Data":"1d3436cfca703ad6670b3baaef7e8c0702fc368d0d7a93e03c5614df7152dcbf"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.690452 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.722885 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.723208 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.223196203 +0000 UTC m=+143.164938886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.736299 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" event={"ID":"d0531246-fb61-45e1-943f-dbba72d91633","Type":"ContainerStarted","Data":"aadbe0dd549d7087678f7b6993f0c126657c7532551d055523ccbf3413c3f9dc"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.737317 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lr5mq" event={"ID":"0d09d33a-6040-4fd1-85a5-ac3a1ca5a913","Type":"ContainerStarted","Data":"fe974fdd80d3122c58a434a59455a798b1de0071f641e65f2ef49d1e215e38d9"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.753680 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" event={"ID":"62975b1e-d898-42d0-8f46-27c47287d53b","Type":"ContainerStarted","Data":"2dfb093dd5202edbce8be23c18a369cd41656b2fdc95ed56d808d8ae462a5d24"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.753744 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" event={"ID":"62975b1e-d898-42d0-8f46-27c47287d53b","Type":"ContainerStarted","Data":"de4702671dd5c2f08f937e319a906563f15ecc0fa5483797db66b96f4e3d72d1"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.755010 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.755423 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-84zsn" podStartSLOduration=123.755387378 podStartE2EDuration="2m3.755387378s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.75221084 +0000 UTC m=+142.693953533" watchObservedRunningTime="2025-11-28 11:55:24.755387378 +0000 UTC m=+142.697130061" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.782162 5030 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wkwgz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.782233 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" podUID="62975b1e-d898-42d0-8f46-27c47287d53b" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.812367 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2p8" podStartSLOduration=122.812350961 podStartE2EDuration="2m2.812350961s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.811379084 +0000 UTC m=+142.753121777" watchObservedRunningTime="2025-11-28 11:55:24.812350961 +0000 UTC m=+142.754093644" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.816826 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" event={"ID":"f677af71-4fb9-41a5-99f4-59800a8de3b7","Type":"ContainerStarted","Data":"41e6a88c5b48eb91bc3c55ea39b8a9aa18a460e9287ef18f81e11fc4b3228a3e"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.827957 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.829966 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.329941881 +0000 UTC m=+143.271684564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.837934 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" event={"ID":"4d1425f4-1e94-443c-bb47-1a473f584069","Type":"ContainerStarted","Data":"a52fe36919ca30ccdf3c581e4c5f3cf93ece2f92c19768a44e7dd08405c04833"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.851814 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" event={"ID":"41d8e0bc-b4df-4f4f-91b5-bc90c1e1f18d","Type":"ContainerStarted","Data":"7e95cce954162bbe563a82c0175a644decd288b4bf3017fcd416658abf0e20b2"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.866769 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jhlzs" event={"ID":"f79ac060-fa0b-487f-a959-90da3f7e1fa5","Type":"ContainerStarted","Data":"1f0b97c02267e20040d444825013dff92d5452d5803244994568992fe5358866"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.883885 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" event={"ID":"89d4a423-452d-4b92-927e-38eadd969e03","Type":"ContainerStarted","Data":"36ad4e43dd9f48d425f297a99dbb116a383665057c1470f6cd260392d039a1b8"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.894929 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" 
event={"ID":"75c49ec5-bda8-4cd4-a64a-10bd4ef5bf45","Type":"ContainerStarted","Data":"e2300786d7646725b3383d2ba2d2045de72f299c70cc3ce2de989f9c5369772c"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.922982 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" event={"ID":"cf3562d8-1f85-460c-b49a-c2922d803c5a","Type":"ContainerStarted","Data":"c49a2675701085bae7d687f2c29eec72fbe7e3e842afeb7cdb08829c7bb9510d"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.923055 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" event={"ID":"cf3562d8-1f85-460c-b49a-c2922d803c5a","Type":"ContainerStarted","Data":"2173c1e7001c0f968e1c5d11e1aea6e3a9db66f8a5613f33c2e2023c5936797e"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.930760 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:24 crc kubenswrapper[5030]: E1128 11:55:24.942341 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.435193136 +0000 UTC m=+143.376935819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.950963 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" event={"ID":"1c8d2d2e-13c2-4efe-9012-706047ea21e5","Type":"ContainerStarted","Data":"c8f107d861a8560e06724cf93f5402133a2fad4577ca4e648a43cf4579596d56"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.953265 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ppq68" podStartSLOduration=123.953253408 podStartE2EDuration="2m3.953253408s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.947439046 +0000 UTC m=+142.889181729" watchObservedRunningTime="2025-11-28 11:55:24.953253408 +0000 UTC m=+142.894996091" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.990917 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" event={"ID":"833da764-f289-48e0-9321-57c4cab21e41","Type":"ContainerStarted","Data":"d9c320982c7fc14fb16cbd830d2167b9feef184373838b35224ef55909aba6e6"} Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.991536 5030 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6ggh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 
10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.991586 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6ggh" podUID="1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.992015 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:24 crc kubenswrapper[5030]: I1128 11:55:24.993065 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.013863 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.023530 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.028527 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.032420 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.032837 5030 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.532812919 +0000 UTC m=+143.474555602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.032960 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.034310 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.53428964 +0000 UTC m=+143.476032323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.039080 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" podStartSLOduration=123.039057732 podStartE2EDuration="2m3.039057732s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:24.987264633 +0000 UTC m=+142.929007326" watchObservedRunningTime="2025-11-28 11:55:25.039057732 +0000 UTC m=+142.980800405" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.042057 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" podStartSLOduration=124.042048685 podStartE2EDuration="2m4.042048685s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.034017492 +0000 UTC m=+142.975760175" watchObservedRunningTime="2025-11-28 11:55:25.042048685 +0000 UTC m=+142.983791368" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.048834 5030 patch_prober.go:28] interesting pod/router-default-5444994796-dz6n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 
11:55:25 crc kubenswrapper[5030]: [-]has-synced failed: reason withheld Nov 28 11:55:25 crc kubenswrapper[5030]: [+]process-running ok Nov 28 11:55:25 crc kubenswrapper[5030]: healthz check failed Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.048905 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz6n5" podUID="273c4d4b-6972-435b-9fda-e802384dffd2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.086874 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" podStartSLOduration=124.08685141 podStartE2EDuration="2m4.08685141s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.086036108 +0000 UTC m=+143.027778791" watchObservedRunningTime="2025-11-28 11:55:25.08685141 +0000 UTC m=+143.028594083" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.136191 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.138178 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.638147536 +0000 UTC m=+143.579890219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.212961 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7c24l" podStartSLOduration=124.212937285 podStartE2EDuration="2m4.212937285s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.150541111 +0000 UTC m=+143.092283794" watchObservedRunningTime="2025-11-28 11:55:25.212937285 +0000 UTC m=+143.154679968" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.213149 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kt5lf" podStartSLOduration=125.21314274 podStartE2EDuration="2m5.21314274s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.211370141 +0000 UTC m=+143.153112824" watchObservedRunningTime="2025-11-28 11:55:25.21314274 +0000 UTC m=+143.154885423" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.231001 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.245320 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.245915 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.745891031 +0000 UTC m=+143.687633714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.287201 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jhlzs" podStartSLOduration=9.287181378 podStartE2EDuration="9.287181378s" podCreationTimestamp="2025-11-28 11:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.262806631 +0000 UTC m=+143.204549324" watchObservedRunningTime="2025-11-28 11:55:25.287181378 +0000 UTC m=+143.228924061" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.288139 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" podStartSLOduration=123.288131755 podStartE2EDuration="2m3.288131755s" podCreationTimestamp="2025-11-28 
11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.286258713 +0000 UTC m=+143.228001396" watchObservedRunningTime="2025-11-28 11:55:25.288131755 +0000 UTC m=+143.229874438" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.331748 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dd8jd" podStartSLOduration=123.331728706 podStartE2EDuration="2m3.331728706s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.330265326 +0000 UTC m=+143.272008009" watchObservedRunningTime="2025-11-28 11:55:25.331728706 +0000 UTC m=+143.273471389" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.348224 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.348811 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.84879031 +0000 UTC m=+143.790532993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.368234 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zv8vs" podStartSLOduration=124.36821728 podStartE2EDuration="2m4.36821728s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.365897135 +0000 UTC m=+143.307639818" watchObservedRunningTime="2025-11-28 11:55:25.36821728 +0000 UTC m=+143.309959963" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.395671 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9blt4" podStartSLOduration=123.395655283 podStartE2EDuration="2m3.395655283s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.394698086 +0000 UTC m=+143.336440769" watchObservedRunningTime="2025-11-28 11:55:25.395655283 +0000 UTC m=+143.337397966" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.452531 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: 
\"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.452902 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:25.952888214 +0000 UTC m=+143.894630897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.463765 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dtwzq" podStartSLOduration=125.463748175 podStartE2EDuration="2m5.463748175s" podCreationTimestamp="2025-11-28 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:25.461003349 +0000 UTC m=+143.402746032" watchObservedRunningTime="2025-11-28 11:55:25.463748175 +0000 UTC m=+143.405490858" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.493737 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" podStartSLOduration=124.493719339 podStartE2EDuration="2m4.493719339s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 
11:55:25.491894747 +0000 UTC m=+143.433637420" watchObservedRunningTime="2025-11-28 11:55:25.493719339 +0000 UTC m=+143.435462022" Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.558227 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.558702 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.058669003 +0000 UTC m=+144.000411686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.661924 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.662587 5030 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.162571291 +0000 UTC m=+144.104313964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.763521 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.763777 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.263730053 +0000 UTC m=+144.205472736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.764352 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.764910 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.264895065 +0000 UTC m=+144.206637748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.866139 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.866385 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.366346105 +0000 UTC m=+144.308088788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.866521 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.866969 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.366959541 +0000 UTC m=+144.308702224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.968567 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.968793 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.468761101 +0000 UTC m=+144.410503784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.968989 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:25 crc kubenswrapper[5030]: E1128 11:55:25.969358 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.469346097 +0000 UTC m=+144.411088970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:25 crc kubenswrapper[5030]: I1128 11:55:25.995839 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jjxhn" event={"ID":"4d1425f4-1e94-443c-bb47-1a473f584069","Type":"ContainerStarted","Data":"40f694f9b6a07d52bb4cac3c98af8472e6369bc4da0ead8e31e94aafef206773"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:25.999990 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" event={"ID":"ca2907d2-9fad-41b4-b625-19e05e2884c5","Type":"ContainerStarted","Data":"01491a2cd1b5b92dd2cda594cf12bcdfcf2d24b6a54ff026dcb1ec8318025ba1"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.001957 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" event={"ID":"1de8e8de-7aad-4d28-937b-d13eea43e672","Type":"ContainerStarted","Data":"5991c9e029773cb44153231b8d4506b43cbd71c8aaa6f06729193fac3aa6d74a"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.003767 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" event={"ID":"814e1602-11f5-41ce-be92-9cefbb6dbe78","Type":"ContainerStarted","Data":"910ddda5b77d17ba9e8ef48750f62ee952835de98e114c2aa47f59349ab183ea"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.003819 5030 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" event={"ID":"814e1602-11f5-41ce-be92-9cefbb6dbe78","Type":"ContainerStarted","Data":"2ac1e374220d2df36eedbc484d0ee67c4d6d55d0f58227c401a06ea6f25c8248"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.004227 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.006568 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" event={"ID":"a5c19601-52c5-40bd-8640-3fd0128e7b6a","Type":"ContainerStarted","Data":"d29ec4220f073289ea73a2d15f9440b7a9fecf6e17fb0bb61903f55a65c5fc4f"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.009348 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" event={"ID":"95d6b274-7def-4790-b0ab-bae4d0f8d6db","Type":"ContainerStarted","Data":"72a20b53bf3b0bd68496dde1600e5676d952f18e6c07fb55a9eabb6795a0e813"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.014810 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ttltj" event={"ID":"1c8d2d2e-13c2-4efe-9012-706047ea21e5","Type":"ContainerStarted","Data":"f678cdb451eeb1087bf799b4ea32ecb87329fbbb9e284b219f5ac319c320c393"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.016981 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" event={"ID":"d0531246-fb61-45e1-943f-dbba72d91633","Type":"ContainerStarted","Data":"95704cf76c2688671453689be6d9bfa952ac87bb64f420d449115b3579b57a8c"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.017004 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" 
event={"ID":"d0531246-fb61-45e1-943f-dbba72d91633","Type":"ContainerStarted","Data":"6f48ec9235f41e2b1c3cd0f21e2eac38eb5ce3f114970177f511ed3e0b718286"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.018180 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b479q" event={"ID":"f677af71-4fb9-41a5-99f4-59800a8de3b7","Type":"ContainerStarted","Data":"9210c1797437a3924ff040b87205da296e114ec2a4384c7a8fec04ac01bdeee0"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.020492 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" event={"ID":"f362eedc-734d-4cfd-831c-6dedca53f975","Type":"ContainerStarted","Data":"ceb70d9ca00ba53e9cb1ba3fa48676cdb69e604d03f4702021ac1f894bf63caa"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.034841 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g77wg" event={"ID":"fe9419f1-075a-4031-8efa-f6b2302bece3","Type":"ContainerStarted","Data":"aaa6a05610a9a1af871ab4a902d556c8936323af957edc7f294facd7a38500fb"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.034885 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.034894 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-g77wg" event={"ID":"fe9419f1-075a-4031-8efa-f6b2302bece3","Type":"ContainerStarted","Data":"ad35821dc8c2576bc5c811bad4c4c3c2748553bf64cbda04b0026308bb0beff8"} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.043883 5030 patch_prober.go:28] interesting pod/router-default-5444994796-dz6n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:55:26 crc kubenswrapper[5030]: [-]has-synced 
failed: reason withheld Nov 28 11:55:26 crc kubenswrapper[5030]: [+]process-running ok Nov 28 11:55:26 crc kubenswrapper[5030]: healthz check failed Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.043929 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz6n5" podUID="273c4d4b-6972-435b-9fda-e802384dffd2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.044468 5030 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6ggh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.044505 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6ggh" podUID="1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.045515 5030 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-frtvx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.045543 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" podUID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.047292 5030 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26fxd" podStartSLOduration=125.047266683 podStartE2EDuration="2m5.047266683s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:26.044772683 +0000 UTC m=+143.986515366" watchObservedRunningTime="2025-11-28 11:55:26.047266683 +0000 UTC m=+143.989009366" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.057237 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wkwgz" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.069609 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:26 crc kubenswrapper[5030]: E1128 11:55:26.095654 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.595629677 +0000 UTC m=+144.537372360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.102780 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" podStartSLOduration=125.102760385 podStartE2EDuration="2m5.102760385s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:26.10256483 +0000 UTC m=+144.044307533" watchObservedRunningTime="2025-11-28 11:55:26.102760385 +0000 UTC m=+144.044503058" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.183205 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:26 crc kubenswrapper[5030]: E1128 11:55:26.183803 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.683779437 +0000 UTC m=+144.625522120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.215790 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-frdgb" podStartSLOduration=125.215767225 podStartE2EDuration="2m5.215767225s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:26.161857298 +0000 UTC m=+144.103599981" watchObservedRunningTime="2025-11-28 11:55:26.215767225 +0000 UTC m=+144.157509908" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.267549 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4f6gt" podStartSLOduration=125.267524024 podStartE2EDuration="2m5.267524024s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:26.217063301 +0000 UTC m=+144.158805984" watchObservedRunningTime="2025-11-28 11:55:26.267524024 +0000 UTC m=+144.209266717" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.284739 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:26 crc kubenswrapper[5030]: E1128 11:55:26.285104 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.785089912 +0000 UTC m=+144.726832595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.312131 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-g77wg" podStartSLOduration=10.312107343 podStartE2EDuration="10.312107343s" podCreationTimestamp="2025-11-28 11:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:26.282261214 +0000 UTC m=+144.224003897" watchObservedRunningTime="2025-11-28 11:55:26.312107343 +0000 UTC m=+144.253850026" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.312835 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bdbjw" podStartSLOduration=125.312828134 podStartE2EDuration="2m5.312828134s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:26.308417941 +0000 UTC 
m=+144.250160634" watchObservedRunningTime="2025-11-28 11:55:26.312828134 +0000 UTC m=+144.254570817" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.357100 5030 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.357548 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" podStartSLOduration=124.357526166 podStartE2EDuration="2m4.357526166s" podCreationTimestamp="2025-11-28 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:26.333489778 +0000 UTC m=+144.275232471" watchObservedRunningTime="2025-11-28 11:55:26.357526166 +0000 UTC m=+144.299268849" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.389505 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:26 crc kubenswrapper[5030]: E1128 11:55:26.389879 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.889866125 +0000 UTC m=+144.831608808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.490534 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:26 crc kubenswrapper[5030]: E1128 11:55:26.490703 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.990674936 +0000 UTC m=+144.932417629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.490788 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:26 crc kubenswrapper[5030]: E1128 11:55:26.491111 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 11:55:26.991098928 +0000 UTC m=+144.932841611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8vhfh" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.541419 5030 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T11:55:26.357135274Z","Handler":null,"Name":""} Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.544960 5030 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.544989 5030 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.592168 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.601566 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.693493 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.696573 5030 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.696603 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.742109 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8vhfh\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:26 crc kubenswrapper[5030]: I1128 11:55:26.922020 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.038665 5030 patch_prober.go:28] interesting pod/router-default-5444994796-dz6n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:55:27 crc kubenswrapper[5030]: [-]has-synced failed: reason withheld Nov 28 11:55:27 crc kubenswrapper[5030]: [+]process-running ok Nov 28 11:55:27 crc kubenswrapper[5030]: healthz check failed Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.039071 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz6n5" podUID="273c4d4b-6972-435b-9fda-e802384dffd2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.075632 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" event={"ID":"d0531246-fb61-45e1-943f-dbba72d91633","Type":"ContainerStarted","Data":"52dda0d76aeae7eff020454903b225e5b092081f476da419ebafad652c4d6bc3"} Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.075700 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" event={"ID":"d0531246-fb61-45e1-943f-dbba72d91633","Type":"ContainerStarted","Data":"4630ae3660bb33c6b3eaae076f38f88281b387c663dcebbfc8e57760ceacdc32"} Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.076627 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lq47d"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.083867 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.088572 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.101167 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq47d"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.103719 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mvvnj" podStartSLOduration=10.103701134 podStartE2EDuration="10.103701134s" podCreationTimestamp="2025-11-28 11:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:27.103107357 +0000 UTC m=+145.044850040" watchObservedRunningTime="2025-11-28 11:55:27.103701134 +0000 UTC m=+145.045443827" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.200397 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-kube-api-access-n8k5q\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.200445 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-catalog-content\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.200518 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-utilities\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.243290 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8vhfh"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.280042 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7c95t"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.281340 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.283160 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.308924 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c95t"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.310213 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-kube-api-access-n8k5q\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.310248 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-catalog-content\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " 
pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.310280 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-utilities\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.310752 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-utilities\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.311281 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-catalog-content\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.371658 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-kube-api-access-n8k5q\") pod \"certified-operators-lq47d\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.411495 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-utilities\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " 
pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.411539 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-catalog-content\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.411557 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszjr\" (UniqueName: \"kubernetes.io/projected/e626568d-b431-46f4-ad61-429b99eec2a9-kube-api-access-jszjr\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.423751 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.474674 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5c98g"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.477555 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.488085 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5c98g"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.515148 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-utilities\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.515214 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-utilities\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.515236 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-catalog-content\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.515266 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszjr\" (UniqueName: \"kubernetes.io/projected/e626568d-b431-46f4-ad61-429b99eec2a9-kube-api-access-jszjr\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.515343 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-4mlfc\" (UniqueName: \"kubernetes.io/projected/14544b4d-bde9-4481-abad-20b1d1c14d72-kube-api-access-4mlfc\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.515363 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-catalog-content\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.515741 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-catalog-content\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.516971 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-utilities\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.533687 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszjr\" (UniqueName: \"kubernetes.io/projected/e626568d-b431-46f4-ad61-429b99eec2a9-kube-api-access-jszjr\") pod \"community-operators-7c95t\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.616451 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-utilities\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.617427 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlfc\" (UniqueName: \"kubernetes.io/projected/14544b4d-bde9-4481-abad-20b1d1c14d72-kube-api-access-4mlfc\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.617452 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-catalog-content\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.617452 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-utilities\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.617932 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-catalog-content\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.635770 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlfc\" (UniqueName: 
\"kubernetes.io/projected/14544b4d-bde9-4481-abad-20b1d1c14d72-kube-api-access-4mlfc\") pod \"certified-operators-5c98g\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") " pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.638433 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.666745 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fmlxr"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.669434 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.681650 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmlxr"] Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.718604 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-utilities\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.719137 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz86s\" (UniqueName: \"kubernetes.io/projected/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-kube-api-access-nz86s\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.719185 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-catalog-content\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.809415 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5c98g" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.820452 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-utilities\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.820537 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz86s\" (UniqueName: \"kubernetes.io/projected/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-kube-api-access-nz86s\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.820568 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-catalog-content\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.821048 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-catalog-content\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 
11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.821306 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-utilities\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.845104 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz86s\" (UniqueName: \"kubernetes.io/projected/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-kube-api-access-nz86s\") pod \"community-operators-fmlxr\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") " pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.900371 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c95t"] Nov 28 11:55:27 crc kubenswrapper[5030]: W1128 11:55:27.919973 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode626568d_b431_46f4_ad61_429b99eec2a9.slice/crio-14ed36eee0980d5e624cd37c6ce7192979fbf31329e0369f80c2fa7846b7c27b WatchSource:0}: Error finding container 14ed36eee0980d5e624cd37c6ce7192979fbf31329e0369f80c2fa7846b7c27b: Status 404 returned error can't find the container with id 14ed36eee0980d5e624cd37c6ce7192979fbf31329e0369f80c2fa7846b7c27b Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.920731 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq47d"] Nov 28 11:55:27 crc kubenswrapper[5030]: W1128 11:55:27.923947 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1cfd735_7a89_4c9e_ace8_2dcb35cfed9c.slice/crio-9a024ba6a11194360bb164885db77a7534f05e3fda5de33a9ada2f82d1f3a97e WatchSource:0}: Error finding 
container 9a024ba6a11194360bb164885db77a7534f05e3fda5de33a9ada2f82d1f3a97e: Status 404 returned error can't find the container with id 9a024ba6a11194360bb164885db77a7534f05e3fda5de33a9ada2f82d1f3a97e Nov 28 11:55:27 crc kubenswrapper[5030]: I1128 11:55:27.996316 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmlxr" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.036712 5030 patch_prober.go:28] interesting pod/router-default-5444994796-dz6n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:55:28 crc kubenswrapper[5030]: [-]has-synced failed: reason withheld Nov 28 11:55:28 crc kubenswrapper[5030]: [+]process-running ok Nov 28 11:55:28 crc kubenswrapper[5030]: healthz check failed Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.036782 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz6n5" podUID="273c4d4b-6972-435b-9fda-e802384dffd2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.078922 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5c98g"] Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.088095 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" event={"ID":"0623247c-d46a-4e16-8731-cdd6d2f4a16a","Type":"ContainerStarted","Data":"1de33d8736c33cf584e69d765e0ce7d955aa6c4789344f35f43a3bc15ef2362e"} Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.090069 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq47d" 
event={"ID":"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c","Type":"ContainerStarted","Data":"9a024ba6a11194360bb164885db77a7534f05e3fda5de33a9ada2f82d1f3a97e"} Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.101668 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c95t" event={"ID":"e626568d-b431-46f4-ad61-429b99eec2a9","Type":"ContainerStarted","Data":"14ed36eee0980d5e624cd37c6ce7192979fbf31329e0369f80c2fa7846b7c27b"} Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.166496 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.173617 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.175384 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.178262 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.179436 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.227786 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2750765e-4cab-48f2-accf-dcba57a535da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.227858 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2750765e-4cab-48f2-accf-dcba57a535da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.287853 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmlxr"] Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.329829 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.329886 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.329909 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2750765e-4cab-48f2-accf-dcba57a535da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.329930 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.329969 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2750765e-4cab-48f2-accf-dcba57a535da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.330008 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.334151 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.334235 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2750765e-4cab-48f2-accf-dcba57a535da-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.352112 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.352530 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.352895 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.356065 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2750765e-4cab-48f2-accf-dcba57a535da-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.399136 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.453074 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.521816 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.533727 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.543886 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 11:55:28 crc kubenswrapper[5030]: I1128 11:55:28.896079 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 11:55:28 crc kubenswrapper[5030]: W1128 11:55:28.948775 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d54ed2b51eb950c9a0d33ef636198aa6a4ee35366c8f42bdd84f36854211e646 WatchSource:0}: Error finding container d54ed2b51eb950c9a0d33ef636198aa6a4ee35366c8f42bdd84f36854211e646: Status 404 returned error can't find the container with id d54ed2b51eb950c9a0d33ef636198aa6a4ee35366c8f42bdd84f36854211e646 Nov 28 11:55:29 crc kubenswrapper[5030]: W1128 11:55:29.013564 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-41c0c3068e6b6458e2d3399f28ce7898b211375f202e578ff26f2abb15731134 WatchSource:0}: Error finding container 41c0c3068e6b6458e2d3399f28ce7898b211375f202e578ff26f2abb15731134: Status 404 returned error can't find the container with id 41c0c3068e6b6458e2d3399f28ce7898b211375f202e578ff26f2abb15731134 Nov 28 11:55:29 crc 
kubenswrapper[5030]: I1128 11:55:29.038286 5030 patch_prober.go:28] interesting pod/router-default-5444994796-dz6n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:55:29 crc kubenswrapper[5030]: [-]has-synced failed: reason withheld Nov 28 11:55:29 crc kubenswrapper[5030]: [+]process-running ok Nov 28 11:55:29 crc kubenswrapper[5030]: healthz check failed Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.038350 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz6n5" podUID="273c4d4b-6972-435b-9fda-e802384dffd2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.081516 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.086515 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.094183 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.106958 5030 generic.go:334] "Generic (PLEG): container finished" podID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerID="c625a62e34ca8cd8494b9134d9dd4a849ad187a433e3718b138f76db4f5f43be" exitCode=0 Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.107013 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmlxr" event={"ID":"80dcfad1-67ed-4289-93e7-e5fcbfd3682d","Type":"ContainerDied","Data":"c625a62e34ca8cd8494b9134d9dd4a849ad187a433e3718b138f76db4f5f43be"} Nov 28 11:55:29 crc 
kubenswrapper[5030]: I1128 11:55:29.107035 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmlxr" event={"ID":"80dcfad1-67ed-4289-93e7-e5fcbfd3682d","Type":"ContainerStarted","Data":"84bee1b2f2b1b660cadbfda53e0fe4c1932b35a4e57a0b7b76fcb36f6d83b2c3"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.110512 5030 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.118339 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d54ed2b51eb950c9a0d33ef636198aa6a4ee35366c8f42bdd84f36854211e646"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.133200 5030 generic.go:334] "Generic (PLEG): container finished" podID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerID="2e94a3ba737c0befff221395a03fb9f02362e4be5532a815279481638d5592ac" exitCode=0 Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.133331 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq47d" event={"ID":"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c","Type":"ContainerDied","Data":"2e94a3ba737c0befff221395a03fb9f02362e4be5532a815279481638d5592ac"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.135217 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.135292 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.138184 5030 generic.go:334] "Generic (PLEG): container finished" podID="e626568d-b431-46f4-ad61-429b99eec2a9" 
containerID="b3837ce42dec3d4b3b048299e2dc095f9f7cc0f4ba6b6655c3679b67765694b9" exitCode=0 Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.138436 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c95t" event={"ID":"e626568d-b431-46f4-ad61-429b99eec2a9","Type":"ContainerDied","Data":"b3837ce42dec3d4b3b048299e2dc095f9f7cc0f4ba6b6655c3679b67765694b9"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.146077 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" event={"ID":"0623247c-d46a-4e16-8731-cdd6d2f4a16a","Type":"ContainerStarted","Data":"73183920bd6fa19e29d2f466bc1cdbc5a3ab87d4c47f43c378252276ee0a5dbc"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.147089 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.151864 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.159809 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2750765e-4cab-48f2-accf-dcba57a535da","Type":"ContainerStarted","Data":"e083fcd545df8100e82c88306776e3d9326aae150466ce3fa99146d942fe59fb"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.165664 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"41c0c3068e6b6458e2d3399f28ce7898b211375f202e578ff26f2abb15731134"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.167743 5030 generic.go:334] "Generic (PLEG): container finished" podID="14544b4d-bde9-4481-abad-20b1d1c14d72" 
containerID="71abfae994803b80ca932a4b7a9b7cb229c45ee5e93ec0956a8f3340520ed085" exitCode=0 Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.167990 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5c98g" event={"ID":"14544b4d-bde9-4481-abad-20b1d1c14d72","Type":"ContainerDied","Data":"71abfae994803b80ca932a4b7a9b7cb229c45ee5e93ec0956a8f3340520ed085"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.168031 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5c98g" event={"ID":"14544b4d-bde9-4481-abad-20b1d1c14d72","Type":"ContainerStarted","Data":"c34d2b87aacc50ccd5b54e2222653aa9292e9a43835a7fa8204f995ac2044c30"} Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.175801 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b4rgr" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.205948 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" podStartSLOduration=128.205928291 podStartE2EDuration="2m8.205928291s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:29.183813226 +0000 UTC m=+147.125555909" watchObservedRunningTime="2025-11-28 11:55:29.205928291 +0000 UTC m=+147.147670984" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.266088 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5md7x"] Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.279822 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.281780 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.288949 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5md7x"] Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.445356 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-utilities\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.445465 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-catalog-content\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.446054 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmphm\" (UniqueName: \"kubernetes.io/projected/778bafea-1fde-45d3-aa84-612f3cbe06ba-kube-api-access-fmphm\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.509482 5030 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6ggh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" 
start-of-body= Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.509509 5030 patch_prober.go:28] interesting pod/downloads-7954f5f757-l6ggh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.509573 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l6ggh" podUID="1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.509588 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l6ggh" podUID="1d2a9f2a-efa6-4d3a-b9ec-2d4b40376fc7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.547792 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-catalog-content\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.547939 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmphm\" (UniqueName: \"kubernetes.io/projected/778bafea-1fde-45d3-aa84-612f3cbe06ba-kube-api-access-fmphm\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.547989 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-utilities\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.549259 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-utilities\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.549410 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-catalog-content\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.571655 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmphm\" (UniqueName: \"kubernetes.io/projected/778bafea-1fde-45d3-aa84-612f3cbe06ba-kube-api-access-fmphm\") pod \"redhat-marketplace-5md7x\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.586913 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.586972 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.589534 5030 patch_prober.go:28] interesting pod/console-f9d7485db-4r4f7 container/console namespace/openshift-console: Startup 
probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.589635 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4r4f7" podUID="5e0cbf40-e788-44c2-9eba-ddd17d412551" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.670137 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-shrxp"] Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.671525 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.683912 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.685438 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-shrxp"] Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.751366 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-utilities\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.751515 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-catalog-content\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " 
pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.751587 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6r2c\" (UniqueName: \"kubernetes.io/projected/948d21f6-477d-4ea8-bc55-e4e061ae2284-kube-api-access-b6r2c\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.852891 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-catalog-content\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.852967 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6r2c\" (UniqueName: \"kubernetes.io/projected/948d21f6-477d-4ea8-bc55-e4e061ae2284-kube-api-access-b6r2c\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.853000 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-utilities\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.853743 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-utilities\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " 
pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.853827 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-catalog-content\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.874500 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6r2c\" (UniqueName: \"kubernetes.io/projected/948d21f6-477d-4ea8-bc55-e4e061ae2284-kube-api-access-b6r2c\") pod \"redhat-marketplace-shrxp\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") " pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:29 crc kubenswrapper[5030]: I1128 11:55:29.996905 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shrxp" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.028590 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.032543 5030 patch_prober.go:28] interesting pod/router-default-5444994796-dz6n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:55:30 crc kubenswrapper[5030]: [-]has-synced failed: reason withheld Nov 28 11:55:30 crc kubenswrapper[5030]: [+]process-running ok Nov 28 11:55:30 crc kubenswrapper[5030]: healthz check failed Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.032590 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz6n5" podUID="273c4d4b-6972-435b-9fda-e802384dffd2" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.128971 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5md7x"] Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.144291 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:55:30 crc kubenswrapper[5030]: W1128 11:55:30.168590 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod778bafea_1fde_45d3_aa84_612f3cbe06ba.slice/crio-0c77aab3c0595c371ec2280e2a643c2993aa2133917cbf46cb3275a4bd133e01 WatchSource:0}: Error finding container 0c77aab3c0595c371ec2280e2a643c2993aa2133917cbf46cb3275a4bd133e01: Status 404 returned error can't find the container with id 0c77aab3c0595c371ec2280e2a643c2993aa2133917cbf46cb3275a4bd133e01 Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.238362 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"648e75dbcf89130551d47991a37a138bab232a14aeb9a2a6e22da266d1d61806"} Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.262724 5030 generic.go:334] "Generic (PLEG): container finished" podID="2750765e-4cab-48f2-accf-dcba57a535da" containerID="f0f560acc611849c7a2f966dd3792195f2e35a806b134b013223e519cf31c7ce" exitCode=0 Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.262818 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2750765e-4cab-48f2-accf-dcba57a535da","Type":"ContainerDied","Data":"f0f560acc611849c7a2f966dd3792195f2e35a806b134b013223e519cf31c7ce"} Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.266475 5030 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b279ec1fc1320c4dafa26dd6c59f8a0007f9ac358147b27c7b254849d2f99ed5"} Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.276806 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6d56a1ce5db39851f8582f21b7d869e487e2f649d12b2209a7b67ab35e8a3dfa"} Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.276844 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"73fe439b01b764a3986e91d5e7dfbc397e3c8739c25b2d004c190cddb58a55e5"} Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.277343 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.279017 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spqtx"] Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.281585 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.284505 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mnz5b" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.285030 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.305731 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spqtx"] Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.449957 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-shrxp"] Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.470266 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vp5l\" (UniqueName: \"kubernetes.io/projected/b790b1a3-16d7-498a-8f14-36e52122ad9b-kube-api-access-5vp5l\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.470338 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-utilities\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.470535 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-catalog-content\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " 
pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.572661 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-catalog-content\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.572768 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vp5l\" (UniqueName: \"kubernetes.io/projected/b790b1a3-16d7-498a-8f14-36e52122ad9b-kube-api-access-5vp5l\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.572804 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-utilities\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.573314 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-utilities\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.573316 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-catalog-content\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc 
kubenswrapper[5030]: I1128 11:55:30.614223 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vp5l\" (UniqueName: \"kubernetes.io/projected/b790b1a3-16d7-498a-8f14-36e52122ad9b-kube-api-access-5vp5l\") pod \"redhat-operators-spqtx\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.640058 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.702257 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n4rms"] Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.703315 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.724121 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4rms"] Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.876574 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4x6m\" (UniqueName: \"kubernetes.io/projected/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-kube-api-access-q4x6m\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.877091 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-utilities\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.877119 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-catalog-content\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.981586 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4x6m\" (UniqueName: \"kubernetes.io/projected/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-kube-api-access-q4x6m\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.981685 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-utilities\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.981715 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-catalog-content\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.982288 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-catalog-content\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:30 crc kubenswrapper[5030]: I1128 11:55:30.982459 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-utilities\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.010688 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4x6m\" (UniqueName: \"kubernetes.io/projected/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-kube-api-access-q4x6m\") pod \"redhat-operators-n4rms\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") " pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.032346 5030 patch_prober.go:28] interesting pod/router-default-5444994796-dz6n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 11:55:31 crc kubenswrapper[5030]: [-]has-synced failed: reason withheld Nov 28 11:55:31 crc kubenswrapper[5030]: [+]process-running ok Nov 28 11:55:31 crc kubenswrapper[5030]: healthz check failed Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.032539 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz6n5" podUID="273c4d4b-6972-435b-9fda-e802384dffd2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.089178 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.307540 5030 generic.go:334] "Generic (PLEG): container finished" podID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerID="389d5b2f2e05d02eca966da40e9ff0d7107e130e9f54d66fdba1ee283e6c9b2f" exitCode=0 Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.307614 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shrxp" event={"ID":"948d21f6-477d-4ea8-bc55-e4e061ae2284","Type":"ContainerDied","Data":"389d5b2f2e05d02eca966da40e9ff0d7107e130e9f54d66fdba1ee283e6c9b2f"} Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.308087 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shrxp" event={"ID":"948d21f6-477d-4ea8-bc55-e4e061ae2284","Type":"ContainerStarted","Data":"86b4de5ac8e120ae3b84d63d35d91aaca6a4e1cb51744f54cc3dc2583cec8366"} Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.331195 5030 generic.go:334] "Generic (PLEG): container finished" podID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerID="305c1783fc5bcbc7b5d2fb81a2508406eef5848a167964637b54ff482cc6992c" exitCode=0 Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.331242 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5md7x" event={"ID":"778bafea-1fde-45d3-aa84-612f3cbe06ba","Type":"ContainerDied","Data":"305c1783fc5bcbc7b5d2fb81a2508406eef5848a167964637b54ff482cc6992c"} Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.331317 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5md7x" event={"ID":"778bafea-1fde-45d3-aa84-612f3cbe06ba","Type":"ContainerStarted","Data":"0c77aab3c0595c371ec2280e2a643c2993aa2133917cbf46cb3275a4bd133e01"} Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.334876 5030 generic.go:334] "Generic (PLEG): container finished" 
podID="89d4a423-452d-4b92-927e-38eadd969e03" containerID="36ad4e43dd9f48d425f297a99dbb116a383665057c1470f6cd260392d039a1b8" exitCode=0 Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.334930 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" event={"ID":"89d4a423-452d-4b92-927e-38eadd969e03","Type":"ContainerDied","Data":"36ad4e43dd9f48d425f297a99dbb116a383665057c1470f6cd260392d039a1b8"} Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.345122 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spqtx"] Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.424198 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4rms"] Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.664796 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.795081 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2750765e-4cab-48f2-accf-dcba57a535da-kubelet-dir\") pod \"2750765e-4cab-48f2-accf-dcba57a535da\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.795273 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2750765e-4cab-48f2-accf-dcba57a535da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2750765e-4cab-48f2-accf-dcba57a535da" (UID: "2750765e-4cab-48f2-accf-dcba57a535da"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.798096 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2750765e-4cab-48f2-accf-dcba57a535da-kube-api-access\") pod \"2750765e-4cab-48f2-accf-dcba57a535da\" (UID: \"2750765e-4cab-48f2-accf-dcba57a535da\") " Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.798457 5030 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2750765e-4cab-48f2-accf-dcba57a535da-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.804447 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2750765e-4cab-48f2-accf-dcba57a535da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2750765e-4cab-48f2-accf-dcba57a535da" (UID: "2750765e-4cab-48f2-accf-dcba57a535da"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.833589 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 11:55:31 crc kubenswrapper[5030]: E1128 11:55:31.833821 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2750765e-4cab-48f2-accf-dcba57a535da" containerName="pruner" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.833834 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2750765e-4cab-48f2-accf-dcba57a535da" containerName="pruner" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.833985 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="2750765e-4cab-48f2-accf-dcba57a535da" containerName="pruner" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.835353 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.837529 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.838253 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.841194 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 11:55:31 crc kubenswrapper[5030]: I1128 11:55:31.899786 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2750765e-4cab-48f2-accf-dcba57a535da-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.003418 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b4950f3-f18c-46db-983d-4b140bdfc86b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.003549 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b4950f3-f18c-46db-983d-4b140bdfc86b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.031727 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.037225 5030 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dz6n5" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.115954 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b4950f3-f18c-46db-983d-4b140bdfc86b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.116215 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b4950f3-f18c-46db-983d-4b140bdfc86b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.117667 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b4950f3-f18c-46db-983d-4b140bdfc86b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.136892 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b4950f3-f18c-46db-983d-4b140bdfc86b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.163777 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.364216 5030 generic.go:334] "Generic (PLEG): container finished" podID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerID="f4fffeea4ac7db9753f2df4255462868fd5a2f8192fc10648bf7284203223a94" exitCode=0 Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.364448 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spqtx" event={"ID":"b790b1a3-16d7-498a-8f14-36e52122ad9b","Type":"ContainerDied","Data":"f4fffeea4ac7db9753f2df4255462868fd5a2f8192fc10648bf7284203223a94"} Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.364861 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spqtx" event={"ID":"b790b1a3-16d7-498a-8f14-36e52122ad9b","Type":"ContainerStarted","Data":"872c61d63d51b04903960e26b1765b601be06d6ca42fa7db82ab56ec0952891f"} Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.380181 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.380675 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2750765e-4cab-48f2-accf-dcba57a535da","Type":"ContainerDied","Data":"e083fcd545df8100e82c88306776e3d9326aae150466ce3fa99146d942fe59fb"} Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.380724 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e083fcd545df8100e82c88306776e3d9326aae150466ce3fa99146d942fe59fb" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.416073 5030 generic.go:334] "Generic (PLEG): container finished" podID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerID="cf1fd3ad49b0181c41184e575c662bc913238193e5bee6f1f11431e4e09683cc" exitCode=0 Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.432330 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4rms" event={"ID":"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79","Type":"ContainerDied","Data":"cf1fd3ad49b0181c41184e575c662bc913238193e5bee6f1f11431e4e09683cc"} Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.432381 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4rms" event={"ID":"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79","Type":"ContainerStarted","Data":"19e9fb2a4c1b696c65144010315e2db9a23bf34d220dcb98a265c58abd1a0c7c"} Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.459935 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.788525 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.925245 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns7vs\" (UniqueName: \"kubernetes.io/projected/89d4a423-452d-4b92-927e-38eadd969e03-kube-api-access-ns7vs\") pod \"89d4a423-452d-4b92-927e-38eadd969e03\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.925314 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89d4a423-452d-4b92-927e-38eadd969e03-config-volume\") pod \"89d4a423-452d-4b92-927e-38eadd969e03\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.925480 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89d4a423-452d-4b92-927e-38eadd969e03-secret-volume\") pod \"89d4a423-452d-4b92-927e-38eadd969e03\" (UID: \"89d4a423-452d-4b92-927e-38eadd969e03\") " Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.931693 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d4a423-452d-4b92-927e-38eadd969e03-config-volume" (OuterVolumeSpecName: "config-volume") pod "89d4a423-452d-4b92-927e-38eadd969e03" (UID: "89d4a423-452d-4b92-927e-38eadd969e03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.936239 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d4a423-452d-4b92-927e-38eadd969e03-kube-api-access-ns7vs" (OuterVolumeSpecName: "kube-api-access-ns7vs") pod "89d4a423-452d-4b92-927e-38eadd969e03" (UID: "89d4a423-452d-4b92-927e-38eadd969e03"). 
InnerVolumeSpecName "kube-api-access-ns7vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:55:32 crc kubenswrapper[5030]: I1128 11:55:32.941286 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d4a423-452d-4b92-927e-38eadd969e03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "89d4a423-452d-4b92-927e-38eadd969e03" (UID: "89d4a423-452d-4b92-927e-38eadd969e03"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.030102 5030 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89d4a423-452d-4b92-927e-38eadd969e03-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.030126 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns7vs\" (UniqueName: \"kubernetes.io/projected/89d4a423-452d-4b92-927e-38eadd969e03-kube-api-access-ns7vs\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.030141 5030 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89d4a423-452d-4b92-927e-38eadd969e03-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.201768 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.201820 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.467671 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b4950f3-f18c-46db-983d-4b140bdfc86b","Type":"ContainerStarted","Data":"2054618098e04517c8e6a5c7df228576867ec40775e90fb9de08b878ea581c7f"} Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.471425 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" event={"ID":"89d4a423-452d-4b92-927e-38eadd969e03","Type":"ContainerDied","Data":"d979ae4e7b2a17c2f2fd0e9b14d35c298d9dd031f79ee2462a6dcbaba58b2ed0"} Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.471495 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d979ae4e7b2a17c2f2fd0e9b14d35c298d9dd031f79ee2462a6dcbaba58b2ed0" Nov 28 11:55:33 crc kubenswrapper[5030]: I1128 11:55:33.471563 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405505-2mvmw" Nov 28 11:55:34 crc kubenswrapper[5030]: I1128 11:55:34.490505 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b4950f3-f18c-46db-983d-4b140bdfc86b","Type":"ContainerStarted","Data":"a13488a998abf28425cc47b234f1b6a6e65881855db4fcfc40cdc18d75677669"} Nov 28 11:55:34 crc kubenswrapper[5030]: I1128 11:55:34.511086 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.511057623 podStartE2EDuration="3.511057623s" podCreationTimestamp="2025-11-28 11:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:55:34.505320624 +0000 UTC m=+152.447063307" watchObservedRunningTime="2025-11-28 11:55:34.511057623 +0000 UTC m=+152.452800306" Nov 28 11:55:35 crc kubenswrapper[5030]: I1128 11:55:35.196658 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-g77wg" Nov 28 11:55:35 crc kubenswrapper[5030]: I1128 11:55:35.502872 5030 generic.go:334] "Generic (PLEG): container finished" podID="4b4950f3-f18c-46db-983d-4b140bdfc86b" containerID="a13488a998abf28425cc47b234f1b6a6e65881855db4fcfc40cdc18d75677669" exitCode=0 Nov 28 11:55:35 crc kubenswrapper[5030]: I1128 11:55:35.502952 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b4950f3-f18c-46db-983d-4b140bdfc86b","Type":"ContainerDied","Data":"a13488a998abf28425cc47b234f1b6a6e65881855db4fcfc40cdc18d75677669"} Nov 28 11:55:39 crc kubenswrapper[5030]: I1128 11:55:39.521644 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l6ggh" Nov 28 11:55:39 crc kubenswrapper[5030]: I1128 
11:55:39.626572 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:39 crc kubenswrapper[5030]: I1128 11:55:39.631276 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4r4f7" Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.399994 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.461691 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b4950f3-f18c-46db-983d-4b140bdfc86b-kube-api-access\") pod \"4b4950f3-f18c-46db-983d-4b140bdfc86b\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.461907 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b4950f3-f18c-46db-983d-4b140bdfc86b-kubelet-dir\") pod \"4b4950f3-f18c-46db-983d-4b140bdfc86b\" (UID: \"4b4950f3-f18c-46db-983d-4b140bdfc86b\") " Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.462060 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b4950f3-f18c-46db-983d-4b140bdfc86b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b4950f3-f18c-46db-983d-4b140bdfc86b" (UID: "4b4950f3-f18c-46db-983d-4b140bdfc86b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.462299 5030 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b4950f3-f18c-46db-983d-4b140bdfc86b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.481724 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4950f3-f18c-46db-983d-4b140bdfc86b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b4950f3-f18c-46db-983d-4b140bdfc86b" (UID: "4b4950f3-f18c-46db-983d-4b140bdfc86b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.556187 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b4950f3-f18c-46db-983d-4b140bdfc86b","Type":"ContainerDied","Data":"2054618098e04517c8e6a5c7df228576867ec40775e90fb9de08b878ea581c7f"} Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.556271 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2054618098e04517c8e6a5c7df228576867ec40775e90fb9de08b878ea581c7f" Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.556236 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 11:55:42 crc kubenswrapper[5030]: I1128 11:55:42.563639 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b4950f3-f18c-46db-983d-4b140bdfc86b-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:55:43 crc kubenswrapper[5030]: I1128 11:55:43.782667 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:43 crc kubenswrapper[5030]: I1128 11:55:43.788487 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a047de37-e5fb-49f1-8b34-94c084894e18-metrics-certs\") pod \"network-metrics-daemon-zg94c\" (UID: \"a047de37-e5fb-49f1-8b34-94c084894e18\") " pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:44 crc kubenswrapper[5030]: I1128 11:55:44.016965 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zg94c" Nov 28 11:55:46 crc kubenswrapper[5030]: I1128 11:55:46.936156 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:56:00 crc kubenswrapper[5030]: I1128 11:56:00.191268 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2lh2r" Nov 28 11:56:03 crc kubenswrapper[5030]: I1128 11:56:03.202610 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:56:03 crc kubenswrapper[5030]: I1128 11:56:03.203150 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:56:08 crc kubenswrapper[5030]: I1128 11:56:08.857019 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.442192 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 11:56:09 crc kubenswrapper[5030]: E1128 11:56:09.442529 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d4a423-452d-4b92-927e-38eadd969e03" containerName="collect-profiles" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.442544 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d4a423-452d-4b92-927e-38eadd969e03" 
containerName="collect-profiles" Nov 28 11:56:09 crc kubenswrapper[5030]: E1128 11:56:09.442571 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4950f3-f18c-46db-983d-4b140bdfc86b" containerName="pruner" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.442577 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4950f3-f18c-46db-983d-4b140bdfc86b" containerName="pruner" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.442699 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d4a423-452d-4b92-927e-38eadd969e03" containerName="collect-profiles" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.442732 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b4950f3-f18c-46db-983d-4b140bdfc86b" containerName="pruner" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.443299 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.446117 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.447551 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.449513 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.645402 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8ccfd-087d-4857-be87-9394c446a411-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 
11:56:09.645509 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ed8ccfd-087d-4857-be87-9394c446a411-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.746871 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ed8ccfd-087d-4857-be87-9394c446a411-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.746945 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8ccfd-087d-4857-be87-9394c446a411-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.747022 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8ccfd-087d-4857-be87-9394c446a411-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:09 crc kubenswrapper[5030]: I1128 11:56:09.786685 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ed8ccfd-087d-4857-be87-9394c446a411-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:10 crc kubenswrapper[5030]: I1128 11:56:10.072586 5030 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:12 crc kubenswrapper[5030]: E1128 11:56:12.634678 5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 11:56:12 crc kubenswrapper[5030]: E1128 11:56:12.635269 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jszjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Res
izePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7c95t_openshift-marketplace(e626568d-b431-46f4-ad61-429b99eec2a9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:12 crc kubenswrapper[5030]: E1128 11:56:12.636492 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7c95t" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" Nov 28 11:56:14 crc kubenswrapper[5030]: E1128 11:56:14.586067 5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 11:56:14 crc kubenswrapper[5030]: E1128 11:56:14.586354 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nz86s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fmlxr_openshift-marketplace(80dcfad1-67ed-4289-93e7-e5fcbfd3682d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:14 crc kubenswrapper[5030]: E1128 11:56:14.587577 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fmlxr" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" Nov 28 11:56:14 crc 
kubenswrapper[5030]: E1128 11:56:14.671364 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7c95t" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" Nov 28 11:56:14 crc kubenswrapper[5030]: E1128 11:56:14.844369 5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 11:56:14 crc kubenswrapper[5030]: E1128 11:56:14.845089 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5md7x_openshift-marketplace(778bafea-1fde-45d3-aa84-612f3cbe06ba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:14 crc kubenswrapper[5030]: E1128 11:56:14.846609 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5md7x" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" Nov 28 11:56:15 crc 
kubenswrapper[5030]: E1128 11:56:15.161502 5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 11:56:15 crc kubenswrapper[5030]: E1128 11:56:15.161764 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6r2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-shrxp_openshift-marketplace(948d21f6-477d-4ea8-bc55-e4e061ae2284): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:15 crc kubenswrapper[5030]: E1128 11:56:15.163056 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-shrxp" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.026906 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.027702 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.039170 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.104735 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-kubelet-dir\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.104923 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/646b709f-223b-4619-aff7-a5e8bcb29d88-kube-api-access\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 
11:56:16.105001 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-var-lock\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.206137 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-var-lock\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.206196 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-kubelet-dir\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.206261 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/646b709f-223b-4619-aff7-a5e8bcb29d88-kube-api-access\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.206266 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-var-lock\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.206408 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-kubelet-dir\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.234687 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/646b709f-223b-4619-aff7-a5e8bcb29d88-kube-api-access\") pod \"installer-9-crc\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:16 crc kubenswrapper[5030]: I1128 11:56:16.365715 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:56:18 crc kubenswrapper[5030]: E1128 11:56:18.641352 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5md7x" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" Nov 28 11:56:18 crc kubenswrapper[5030]: E1128 11:56:18.641359 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fmlxr" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" Nov 28 11:56:18 crc kubenswrapper[5030]: E1128 11:56:18.641912 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-shrxp" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.951897 
5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.952652 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vp5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-spqtx_openshift-marketplace(b790b1a3-16d7-498a-8f14-36e52122ad9b): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.953789 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-spqtx" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.964887 5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.965115 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mlfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5c98g_openshift-marketplace(14544b4d-bde9-4481-abad-20b1d1c14d72): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.967082 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5c98g" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" Nov 28 11:56:21 crc 
kubenswrapper[5030]: E1128 11:56:21.984898 5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.985097 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4x6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-n4rms_openshift-marketplace(b339c3d5-ab2d-4b8f-958c-14a90aa2bd79): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:21 crc kubenswrapper[5030]: E1128 11:56:21.987289 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n4rms" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" Nov 28 11:56:22 crc kubenswrapper[5030]: E1128 11:56:22.001267 5030 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 11:56:22 crc kubenswrapper[5030]: E1128 11:56:22.002519 5030 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8k5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lq47d_openshift-marketplace(e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 11:56:22 crc kubenswrapper[5030]: E1128 11:56:22.006146 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lq47d" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" Nov 28 11:56:22 crc 
kubenswrapper[5030]: I1128 11:56:22.342204 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zg94c"] Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.416213 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.432094 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 11:56:22 crc kubenswrapper[5030]: W1128 11:56:22.443606 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod646b709f_223b_4619_aff7_a5e8bcb29d88.slice/crio-d1147c272419b75ac6417652132dd0e02275ccd2e7de6dcaa335a98df794deb6 WatchSource:0}: Error finding container d1147c272419b75ac6417652132dd0e02275ccd2e7de6dcaa335a98df794deb6: Status 404 returned error can't find the container with id d1147c272419b75ac6417652132dd0e02275ccd2e7de6dcaa335a98df794deb6 Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.845603 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9ed8ccfd-087d-4857-be87-9394c446a411","Type":"ContainerStarted","Data":"ba13c20abbe49107cbafb571b2f8f2d1313cc3dcb800e1433f615b49200710be"} Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.846119 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9ed8ccfd-087d-4857-be87-9394c446a411","Type":"ContainerStarted","Data":"2bb07c87289c5677c5c6044662f487e98b096ea2efa0328aca2e38fa446240fc"} Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.847719 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zg94c" event={"ID":"a047de37-e5fb-49f1-8b34-94c084894e18","Type":"ContainerStarted","Data":"0f01d798998656bbca6608219d47fce078422a3d09c5b524d42b0645fe3468e8"} Nov 28 11:56:22 crc 
kubenswrapper[5030]: I1128 11:56:22.847782 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zg94c" event={"ID":"a047de37-e5fb-49f1-8b34-94c084894e18","Type":"ContainerStarted","Data":"0b8e03d4ed3404056dbf54a28955fb918df3fea233cbdd731a4f558f4cb16833"} Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.854612 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"646b709f-223b-4619-aff7-a5e8bcb29d88","Type":"ContainerStarted","Data":"ea032c0b64d0a25f6ed11a740d8254b04d7153d247850e2aa5c739edcbca2ea4"} Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.854683 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"646b709f-223b-4619-aff7-a5e8bcb29d88","Type":"ContainerStarted","Data":"d1147c272419b75ac6417652132dd0e02275ccd2e7de6dcaa335a98df794deb6"} Nov 28 11:56:22 crc kubenswrapper[5030]: E1128 11:56:22.856361 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n4rms" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" Nov 28 11:56:22 crc kubenswrapper[5030]: E1128 11:56:22.856409 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5c98g" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" Nov 28 11:56:22 crc kubenswrapper[5030]: E1128 11:56:22.856868 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lq47d" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" Nov 28 11:56:22 crc kubenswrapper[5030]: E1128 11:56:22.861164 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-spqtx" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" Nov 28 11:56:22 crc kubenswrapper[5030]: I1128 11:56:22.863477 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=13.863453148 podStartE2EDuration="13.863453148s" podCreationTimestamp="2025-11-28 11:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:56:22.860171632 +0000 UTC m=+200.801914335" watchObservedRunningTime="2025-11-28 11:56:22.863453148 +0000 UTC m=+200.805195831" Nov 28 11:56:23 crc kubenswrapper[5030]: I1128 11:56:23.863525 5030 generic.go:334] "Generic (PLEG): container finished" podID="9ed8ccfd-087d-4857-be87-9394c446a411" containerID="ba13c20abbe49107cbafb571b2f8f2d1313cc3dcb800e1433f615b49200710be" exitCode=0 Nov 28 11:56:23 crc kubenswrapper[5030]: I1128 11:56:23.863769 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9ed8ccfd-087d-4857-be87-9394c446a411","Type":"ContainerDied","Data":"ba13c20abbe49107cbafb571b2f8f2d1313cc3dcb800e1433f615b49200710be"} Nov 28 11:56:23 crc kubenswrapper[5030]: I1128 11:56:23.865738 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zg94c" 
event={"ID":"a047de37-e5fb-49f1-8b34-94c084894e18","Type":"ContainerStarted","Data":"7217a5632bb503ad950054f7e2d8fedc9b3dc6660371b9a706c316c18d5f1c72"} Nov 28 11:56:23 crc kubenswrapper[5030]: I1128 11:56:23.883164 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.883134014 podStartE2EDuration="7.883134014s" podCreationTimestamp="2025-11-28 11:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:56:22.975379048 +0000 UTC m=+200.917121731" watchObservedRunningTime="2025-11-28 11:56:23.883134014 +0000 UTC m=+201.824876707" Nov 28 11:56:23 crc kubenswrapper[5030]: I1128 11:56:23.914962 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zg94c" podStartSLOduration=182.914926037 podStartE2EDuration="3m2.914926037s" podCreationTimestamp="2025-11-28 11:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:56:23.898976278 +0000 UTC m=+201.840718961" watchObservedRunningTime="2025-11-28 11:56:23.914926037 +0000 UTC m=+201.856668760" Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.120556 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.243307 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ed8ccfd-087d-4857-be87-9394c446a411-kube-api-access\") pod \"9ed8ccfd-087d-4857-be87-9394c446a411\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.243467 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8ccfd-087d-4857-be87-9394c446a411-kubelet-dir\") pod \"9ed8ccfd-087d-4857-be87-9394c446a411\" (UID: \"9ed8ccfd-087d-4857-be87-9394c446a411\") " Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.243596 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ed8ccfd-087d-4857-be87-9394c446a411-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ed8ccfd-087d-4857-be87-9394c446a411" (UID: "9ed8ccfd-087d-4857-be87-9394c446a411"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.243756 5030 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8ccfd-087d-4857-be87-9394c446a411-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.250510 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed8ccfd-087d-4857-be87-9394c446a411-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ed8ccfd-087d-4857-be87-9394c446a411" (UID: "9ed8ccfd-087d-4857-be87-9394c446a411"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.345782 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ed8ccfd-087d-4857-be87-9394c446a411-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.880815 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9ed8ccfd-087d-4857-be87-9394c446a411","Type":"ContainerDied","Data":"2bb07c87289c5677c5c6044662f487e98b096ea2efa0328aca2e38fa446240fc"} Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.881441 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bb07c87289c5677c5c6044662f487e98b096ea2efa0328aca2e38fa446240fc" Nov 28 11:56:25 crc kubenswrapper[5030]: I1128 11:56:25.881180 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 11:56:28 crc kubenswrapper[5030]: I1128 11:56:28.909426 5030 generic.go:334] "Generic (PLEG): container finished" podID="e626568d-b431-46f4-ad61-429b99eec2a9" containerID="89e55122214dcaf2a0c0a9bb74dbcfa4238e00af8112f5d5aff4afe931cbe606" exitCode=0 Nov 28 11:56:28 crc kubenswrapper[5030]: I1128 11:56:28.909516 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c95t" event={"ID":"e626568d-b431-46f4-ad61-429b99eec2a9","Type":"ContainerDied","Data":"89e55122214dcaf2a0c0a9bb74dbcfa4238e00af8112f5d5aff4afe931cbe606"} Nov 28 11:56:29 crc kubenswrapper[5030]: I1128 11:56:29.919743 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c95t" event={"ID":"e626568d-b431-46f4-ad61-429b99eec2a9","Type":"ContainerStarted","Data":"588f96aada3fbbc2ea0a1bac8ded0114644c6b301921933803b703f8ddf2bc37"} Nov 28 11:56:29 crc 
kubenswrapper[5030]: I1128 11:56:29.950179 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7c95t" podStartSLOduration=2.753862693 podStartE2EDuration="1m2.950146738s" podCreationTimestamp="2025-11-28 11:55:27 +0000 UTC" firstStartedPulling="2025-11-28 11:55:29.142066265 +0000 UTC m=+147.083808938" lastFinishedPulling="2025-11-28 11:56:29.33835028 +0000 UTC m=+207.280092983" observedRunningTime="2025-11-28 11:56:29.943877494 +0000 UTC m=+207.885620177" watchObservedRunningTime="2025-11-28 11:56:29.950146738 +0000 UTC m=+207.891889411" Nov 28 11:56:31 crc kubenswrapper[5030]: I1128 11:56:31.938753 5030 generic.go:334] "Generic (PLEG): container finished" podID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerID="5e1da0f367f922c602066ceaa3bc406e2f37c5f2ab956dac28e4d8defe0dda49" exitCode=0 Nov 28 11:56:31 crc kubenswrapper[5030]: I1128 11:56:31.938964 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5md7x" event={"ID":"778bafea-1fde-45d3-aa84-612f3cbe06ba","Type":"ContainerDied","Data":"5e1da0f367f922c602066ceaa3bc406e2f37c5f2ab956dac28e4d8defe0dda49"} Nov 28 11:56:32 crc kubenswrapper[5030]: I1128 11:56:32.947000 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5md7x" event={"ID":"778bafea-1fde-45d3-aa84-612f3cbe06ba","Type":"ContainerStarted","Data":"0fa74c640893f21273ad2607fe4babdb3de7fe666947d0dd386cca0d34c74679"} Nov 28 11:56:32 crc kubenswrapper[5030]: I1128 11:56:32.977296 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5md7x" podStartSLOduration=2.764292111 podStartE2EDuration="1m3.977266543s" podCreationTimestamp="2025-11-28 11:55:29 +0000 UTC" firstStartedPulling="2025-11-28 11:55:31.333957454 +0000 UTC m=+149.275700137" lastFinishedPulling="2025-11-28 11:56:32.546931886 +0000 UTC m=+210.488674569" 
observedRunningTime="2025-11-28 11:56:32.971979685 +0000 UTC m=+210.913722398" watchObservedRunningTime="2025-11-28 11:56:32.977266543 +0000 UTC m=+210.919009266" Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.201580 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.201661 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.201726 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.202524 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.202642 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94" gracePeriod=600 Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 
11:56:33.958172 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94" exitCode=0
Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.958258 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94"}
Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.962855 5030 generic.go:334] "Generic (PLEG): container finished" podID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerID="35854aa45e0c448d27371f1d22224713341eb6797a08cab20a3023695b8fde40" exitCode=0
Nov 28 11:56:33 crc kubenswrapper[5030]: I1128 11:56:33.962924 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shrxp" event={"ID":"948d21f6-477d-4ea8-bc55-e4e061ae2284","Type":"ContainerDied","Data":"35854aa45e0c448d27371f1d22224713341eb6797a08cab20a3023695b8fde40"}
Nov 28 11:56:34 crc kubenswrapper[5030]: I1128 11:56:34.977932 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"8114faafcf69ecaca67dafc3c5944ffd0ee0fd234807f68465536643254d90e4"}
Nov 28 11:56:34 crc kubenswrapper[5030]: I1128 11:56:34.980675 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmlxr" event={"ID":"80dcfad1-67ed-4289-93e7-e5fcbfd3682d","Type":"ContainerStarted","Data":"0573b3561a6e7e029a9594f88e9b57154ef1a4a6d8b099ff9bf0726e27eb22ba"}
Nov 28 11:56:34 crc kubenswrapper[5030]: I1128 11:56:34.987122 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shrxp" event={"ID":"948d21f6-477d-4ea8-bc55-e4e061ae2284","Type":"ContainerStarted","Data":"b68e44c75644e2bdacc694936ead57318ac6dd9e31e2f29a10718ad7b1ee73a4"}
Nov 28 11:56:35 crc kubenswrapper[5030]: I1128 11:56:35.035564 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-shrxp" podStartSLOduration=2.679801961 podStartE2EDuration="1m6.035536251s" podCreationTimestamp="2025-11-28 11:55:29 +0000 UTC" firstStartedPulling="2025-11-28 11:55:31.317318671 +0000 UTC m=+149.259061354" lastFinishedPulling="2025-11-28 11:56:34.673052961 +0000 UTC m=+212.614795644" observedRunningTime="2025-11-28 11:56:35.024798311 +0000 UTC m=+212.966541014" watchObservedRunningTime="2025-11-28 11:56:35.035536251 +0000 UTC m=+212.977278944"
Nov 28 11:56:35 crc kubenswrapper[5030]: I1128 11:56:35.993540 5030 generic.go:334] "Generic (PLEG): container finished" podID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerID="0573b3561a6e7e029a9594f88e9b57154ef1a4a6d8b099ff9bf0726e27eb22ba" exitCode=0
Nov 28 11:56:35 crc kubenswrapper[5030]: I1128 11:56:35.993620 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmlxr" event={"ID":"80dcfad1-67ed-4289-93e7-e5fcbfd3682d","Type":"ContainerDied","Data":"0573b3561a6e7e029a9594f88e9b57154ef1a4a6d8b099ff9bf0726e27eb22ba"}
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.012109 5030 generic.go:334] "Generic (PLEG): container finished" podID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerID="c0e9b74db3e474a4dc1792f01b64bbf34e8e69bfafa383efe39f52bad83a52cb" exitCode=0
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.012291 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq47d" event={"ID":"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c","Type":"ContainerDied","Data":"c0e9b74db3e474a4dc1792f01b64bbf34e8e69bfafa383efe39f52bad83a52cb"}
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.025418 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmlxr" event={"ID":"80dcfad1-67ed-4289-93e7-e5fcbfd3682d","Type":"ContainerStarted","Data":"36563cbd9708a46a2104d972697d3d98056cf4483120a9e2f18201d54b8c61ea"}
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.418634 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fmlxr" podStartSLOduration=2.779897783 podStartE2EDuration="1m10.418596693s" podCreationTimestamp="2025-11-28 11:55:27 +0000 UTC" firstStartedPulling="2025-11-28 11:55:29.110231641 +0000 UTC m=+147.051974324" lastFinishedPulling="2025-11-28 11:56:36.748930531 +0000 UTC m=+214.690673234" observedRunningTime="2025-11-28 11:56:37.047355224 +0000 UTC m=+214.989097907" watchObservedRunningTime="2025-11-28 11:56:37.418596693 +0000 UTC m=+215.360339406"
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.639641 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7c95t"
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.639741 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7c95t"
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.785019 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7c95t"
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.997219 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fmlxr"
Nov 28 11:56:37 crc kubenswrapper[5030]: I1128 11:56:37.997753 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fmlxr"
Nov 28 11:56:38 crc kubenswrapper[5030]: I1128 11:56:38.036281 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq47d" event={"ID":"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c","Type":"ContainerStarted","Data":"a58992aa9e0f559a81c3262b99f07d83e5f62bb73fd821b21a26bdf88eaade9e"}
Nov 28 11:56:38 crc kubenswrapper[5030]: I1128 11:56:38.062725 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lq47d" podStartSLOduration=2.523398779 podStartE2EDuration="1m11.062704057s" podCreationTimestamp="2025-11-28 11:55:27 +0000 UTC" firstStartedPulling="2025-11-28 11:55:29.135825992 +0000 UTC m=+147.077568675" lastFinishedPulling="2025-11-28 11:56:37.67513126 +0000 UTC m=+215.616873953" observedRunningTime="2025-11-28 11:56:38.058786495 +0000 UTC m=+216.000529178" watchObservedRunningTime="2025-11-28 11:56:38.062704057 +0000 UTC m=+216.004446740"
Nov 28 11:56:38 crc kubenswrapper[5030]: I1128 11:56:38.088892 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7c95t"
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.045665 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5c98g" event={"ID":"14544b4d-bde9-4481-abad-20b1d1c14d72","Type":"ContainerStarted","Data":"a39a9df6f581fde6ec9e9b65ce1d7175751a93a735895986be7d9bc499a3cfce"}
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.050155 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4rms" event={"ID":"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79","Type":"ContainerStarted","Data":"519d63679d064bf1c9779e9b7b012582e95389bbf5c840686f543fe6d8463991"}
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.052660 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spqtx" event={"ID":"b790b1a3-16d7-498a-8f14-36e52122ad9b","Type":"ContainerStarted","Data":"d227e530adb92f3bc5ffb7208dffa450879c0f7c00920e026e0d5e92783c493f"}
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.056920 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fmlxr" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerName="registry-server" probeResult="failure" output=<
Nov 28 11:56:39 crc kubenswrapper[5030]: timeout: failed to connect service ":50051" within 1s
Nov 28 11:56:39 crc kubenswrapper[5030]: >
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.130380 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-456s8"]
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.684782 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5md7x"
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.684878 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5md7x"
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.729231 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5md7x"
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.998353 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-shrxp"
Nov 28 11:56:39 crc kubenswrapper[5030]: I1128 11:56:39.998541 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-shrxp"
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.042554 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-shrxp"
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.061313 5030 generic.go:334] "Generic (PLEG): container finished" podID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerID="519d63679d064bf1c9779e9b7b012582e95389bbf5c840686f543fe6d8463991" exitCode=0
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.061418 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4rms" event={"ID":"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79","Type":"ContainerDied","Data":"519d63679d064bf1c9779e9b7b012582e95389bbf5c840686f543fe6d8463991"}
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.064299 5030 generic.go:334] "Generic (PLEG): container finished" podID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerID="d227e530adb92f3bc5ffb7208dffa450879c0f7c00920e026e0d5e92783c493f" exitCode=0
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.064374 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spqtx" event={"ID":"b790b1a3-16d7-498a-8f14-36e52122ad9b","Type":"ContainerDied","Data":"d227e530adb92f3bc5ffb7208dffa450879c0f7c00920e026e0d5e92783c493f"}
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.070992 5030 generic.go:334] "Generic (PLEG): container finished" podID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerID="a39a9df6f581fde6ec9e9b65ce1d7175751a93a735895986be7d9bc499a3cfce" exitCode=0
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.071093 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5c98g" event={"ID":"14544b4d-bde9-4481-abad-20b1d1c14d72","Type":"ContainerDied","Data":"a39a9df6f581fde6ec9e9b65ce1d7175751a93a735895986be7d9bc499a3cfce"}
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.148897 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-shrxp"
Nov 28 11:56:40 crc kubenswrapper[5030]: I1128 11:56:40.157448 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5md7x"
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.080635 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5c98g" event={"ID":"14544b4d-bde9-4481-abad-20b1d1c14d72","Type":"ContainerStarted","Data":"02aaf8badc85ab7863c8bd19e95176d9db6e2cc9c8599937fb555edc3ab97552"}
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.085231 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4rms" event={"ID":"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79","Type":"ContainerStarted","Data":"fba421790058ede569f8e245d469ee36d8cb3fd2942467b7632893bec5bbe028"}
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.088376 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spqtx" event={"ID":"b790b1a3-16d7-498a-8f14-36e52122ad9b","Type":"ContainerStarted","Data":"4a92fae74d12cd1365125dbca346d6c2698fb9dee32971b166f665c033b3600c"}
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.090445 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n4rms"
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.090584 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n4rms"
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.114636 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5c98g" podStartSLOduration=2.496989023 podStartE2EDuration="1m14.114611251s" podCreationTimestamp="2025-11-28 11:55:27 +0000 UTC" firstStartedPulling="2025-11-28 11:55:29.169220491 +0000 UTC m=+147.110963174" lastFinishedPulling="2025-11-28 11:56:40.786842719 +0000 UTC m=+218.728585402" observedRunningTime="2025-11-28 11:56:41.110995607 +0000 UTC m=+219.052738320" watchObservedRunningTime="2025-11-28 11:56:41.114611251 +0000 UTC m=+219.056353944"
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.147011 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spqtx" podStartSLOduration=2.8697045 podStartE2EDuration="1m11.146989928s" podCreationTimestamp="2025-11-28 11:55:30 +0000 UTC" firstStartedPulling="2025-11-28 11:55:32.370306576 +0000 UTC m=+150.312049259" lastFinishedPulling="2025-11-28 11:56:40.647591994 +0000 UTC m=+218.589334687" observedRunningTime="2025-11-28 11:56:41.129156201 +0000 UTC m=+219.070898884" watchObservedRunningTime="2025-11-28 11:56:41.146989928 +0000 UTC m=+219.088732611"
Nov 28 11:56:41 crc kubenswrapper[5030]: I1128 11:56:41.148957 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n4rms" podStartSLOduration=2.881145638 podStartE2EDuration="1m11.14895021s" podCreationTimestamp="2025-11-28 11:55:30 +0000 UTC" firstStartedPulling="2025-11-28 11:55:32.418244978 +0000 UTC m=+150.359987661" lastFinishedPulling="2025-11-28 11:56:40.68604955 +0000 UTC m=+218.627792233" observedRunningTime="2025-11-28 11:56:41.147361538 +0000 UTC m=+219.089104231" watchObservedRunningTime="2025-11-28 11:56:41.14895021 +0000 UTC m=+219.090692893"
Nov 28 11:56:42 crc kubenswrapper[5030]: I1128 11:56:42.133356 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4rms" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="registry-server" probeResult="failure" output=<
Nov 28 11:56:42 crc kubenswrapper[5030]: timeout: failed to connect service ":50051" within 1s
Nov 28 11:56:42 crc kubenswrapper[5030]: >
Nov 28 11:56:43 crc kubenswrapper[5030]: I1128 11:56:43.840084 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-shrxp"]
Nov 28 11:56:43 crc kubenswrapper[5030]: I1128 11:56:43.840572 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-shrxp" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="registry-server" containerID="cri-o://b68e44c75644e2bdacc694936ead57318ac6dd9e31e2f29a10718ad7b1ee73a4" gracePeriod=2
Nov 28 11:56:46 crc kubenswrapper[5030]: I1128 11:56:46.128880 5030 generic.go:334] "Generic (PLEG): container finished" podID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerID="b68e44c75644e2bdacc694936ead57318ac6dd9e31e2f29a10718ad7b1ee73a4" exitCode=0
Nov 28 11:56:46 crc kubenswrapper[5030]: I1128 11:56:46.128964 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shrxp" event={"ID":"948d21f6-477d-4ea8-bc55-e4e061ae2284","Type":"ContainerDied","Data":"b68e44c75644e2bdacc694936ead57318ac6dd9e31e2f29a10718ad7b1ee73a4"}
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.424207 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lq47d"
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.424322 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lq47d"
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.504538 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lq47d"
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.872232 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5c98g"
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.873637 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5c98g"
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.926364 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5c98g"
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.956242 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shrxp"
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.988004 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6r2c\" (UniqueName: \"kubernetes.io/projected/948d21f6-477d-4ea8-bc55-e4e061ae2284-kube-api-access-b6r2c\") pod \"948d21f6-477d-4ea8-bc55-e4e061ae2284\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") "
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.988093 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-catalog-content\") pod \"948d21f6-477d-4ea8-bc55-e4e061ae2284\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") "
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.988284 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-utilities\") pod \"948d21f6-477d-4ea8-bc55-e4e061ae2284\" (UID: \"948d21f6-477d-4ea8-bc55-e4e061ae2284\") "
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.989751 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-utilities" (OuterVolumeSpecName: "utilities") pod "948d21f6-477d-4ea8-bc55-e4e061ae2284" (UID: "948d21f6-477d-4ea8-bc55-e4e061ae2284"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:56:47 crc kubenswrapper[5030]: I1128 11:56:47.997291 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948d21f6-477d-4ea8-bc55-e4e061ae2284-kube-api-access-b6r2c" (OuterVolumeSpecName: "kube-api-access-b6r2c") pod "948d21f6-477d-4ea8-bc55-e4e061ae2284" (UID: "948d21f6-477d-4ea8-bc55-e4e061ae2284"). InnerVolumeSpecName "kube-api-access-b6r2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.019292 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "948d21f6-477d-4ea8-bc55-e4e061ae2284" (UID: "948d21f6-477d-4ea8-bc55-e4e061ae2284"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.066666 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fmlxr"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.093819 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.093883 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6r2c\" (UniqueName: \"kubernetes.io/projected/948d21f6-477d-4ea8-bc55-e4e061ae2284-kube-api-access-b6r2c\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.093907 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948d21f6-477d-4ea8-bc55-e4e061ae2284-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.110511 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fmlxr"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.143944 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-shrxp" event={"ID":"948d21f6-477d-4ea8-bc55-e4e061ae2284","Type":"ContainerDied","Data":"86b4de5ac8e120ae3b84d63d35d91aaca6a4e1cb51744f54cc3dc2583cec8366"}
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.144039 5030 scope.go:117] "RemoveContainer" containerID="b68e44c75644e2bdacc694936ead57318ac6dd9e31e2f29a10718ad7b1ee73a4"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.144223 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-shrxp"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.179267 5030 scope.go:117] "RemoveContainer" containerID="35854aa45e0c448d27371f1d22224713341eb6797a08cab20a3023695b8fde40"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.188529 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-shrxp"]
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.191045 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-shrxp"]
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.200986 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lq47d"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.202542 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5c98g"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.224873 5030 scope.go:117] "RemoveContainer" containerID="389d5b2f2e05d02eca966da40e9ff0d7107e130e9f54d66fdba1ee283e6c9b2f"
Nov 28 11:56:48 crc kubenswrapper[5030]: I1128 11:56:48.402170 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" path="/var/lib/kubelet/pods/948d21f6-477d-4ea8-bc55-e4e061ae2284/volumes"
Nov 28 11:56:50 crc kubenswrapper[5030]: I1128 11:56:50.640856 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spqtx"
Nov 28 11:56:50 crc kubenswrapper[5030]: I1128 11:56:50.640931 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spqtx"
Nov 28 11:56:50 crc kubenswrapper[5030]: I1128 11:56:50.645738 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5c98g"]
Nov 28 11:56:50 crc kubenswrapper[5030]: I1128 11:56:50.715372 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-spqtx"
Nov 28 11:56:51 crc kubenswrapper[5030]: I1128 11:56:51.147461 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n4rms"
Nov 28 11:56:51 crc kubenswrapper[5030]: I1128 11:56:51.179105 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5c98g" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerName="registry-server" containerID="cri-o://02aaf8badc85ab7863c8bd19e95176d9db6e2cc9c8599937fb555edc3ab97552" gracePeriod=2
Nov 28 11:56:51 crc kubenswrapper[5030]: I1128 11:56:51.192935 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n4rms"
Nov 28 11:56:51 crc kubenswrapper[5030]: I1128 11:56:51.222602 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spqtx"
Nov 28 11:56:51 crc kubenswrapper[5030]: I1128 11:56:51.642107 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmlxr"]
Nov 28 11:56:51 crc kubenswrapper[5030]: I1128 11:56:51.642555 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fmlxr" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerName="registry-server" containerID="cri-o://36563cbd9708a46a2104d972697d3d98056cf4483120a9e2f18201d54b8c61ea" gracePeriod=2
Nov 28 11:56:52 crc kubenswrapper[5030]: I1128 11:56:52.201256 5030 generic.go:334] "Generic (PLEG): container finished" podID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerID="02aaf8badc85ab7863c8bd19e95176d9db6e2cc9c8599937fb555edc3ab97552" exitCode=0
Nov 28 11:56:52 crc kubenswrapper[5030]: I1128 11:56:52.201402 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5c98g" event={"ID":"14544b4d-bde9-4481-abad-20b1d1c14d72","Type":"ContainerDied","Data":"02aaf8badc85ab7863c8bd19e95176d9db6e2cc9c8599937fb555edc3ab97552"}
Nov 28 11:56:52 crc kubenswrapper[5030]: I1128 11:56:52.205648 5030 generic.go:334] "Generic (PLEG): container finished" podID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerID="36563cbd9708a46a2104d972697d3d98056cf4483120a9e2f18201d54b8c61ea" exitCode=0
Nov 28 11:56:52 crc kubenswrapper[5030]: I1128 11:56:52.205718 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmlxr" event={"ID":"80dcfad1-67ed-4289-93e7-e5fcbfd3682d","Type":"ContainerDied","Data":"36563cbd9708a46a2104d972697d3d98056cf4483120a9e2f18201d54b8c61ea"}
Nov 28 11:56:54 crc kubenswrapper[5030]: I1128 11:56:54.050727 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4rms"]
Nov 28 11:56:54 crc kubenswrapper[5030]: I1128 11:56:54.052903 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n4rms" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="registry-server" containerID="cri-o://fba421790058ede569f8e245d469ee36d8cb3fd2942467b7632893bec5bbe028" gracePeriod=2
Nov 28 11:56:55 crc kubenswrapper[5030]: I1128 11:56:55.960891 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5c98g"
Nov 28 11:56:55 crc kubenswrapper[5030]: I1128 11:56:55.967502 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmlxr"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.110240 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mlfc\" (UniqueName: \"kubernetes.io/projected/14544b4d-bde9-4481-abad-20b1d1c14d72-kube-api-access-4mlfc\") pod \"14544b4d-bde9-4481-abad-20b1d1c14d72\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.110447 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz86s\" (UniqueName: \"kubernetes.io/projected/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-kube-api-access-nz86s\") pod \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.110557 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-utilities\") pod \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.110625 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-utilities\") pod \"14544b4d-bde9-4481-abad-20b1d1c14d72\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.110674 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-catalog-content\") pod \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\" (UID: \"80dcfad1-67ed-4289-93e7-e5fcbfd3682d\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.110725 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-catalog-content\") pod \"14544b4d-bde9-4481-abad-20b1d1c14d72\" (UID: \"14544b4d-bde9-4481-abad-20b1d1c14d72\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.111701 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-utilities" (OuterVolumeSpecName: "utilities") pod "14544b4d-bde9-4481-abad-20b1d1c14d72" (UID: "14544b4d-bde9-4481-abad-20b1d1c14d72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.111836 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-utilities" (OuterVolumeSpecName: "utilities") pod "80dcfad1-67ed-4289-93e7-e5fcbfd3682d" (UID: "80dcfad1-67ed-4289-93e7-e5fcbfd3682d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.118814 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-kube-api-access-nz86s" (OuterVolumeSpecName: "kube-api-access-nz86s") pod "80dcfad1-67ed-4289-93e7-e5fcbfd3682d" (UID: "80dcfad1-67ed-4289-93e7-e5fcbfd3682d"). InnerVolumeSpecName "kube-api-access-nz86s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.120370 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14544b4d-bde9-4481-abad-20b1d1c14d72-kube-api-access-4mlfc" (OuterVolumeSpecName: "kube-api-access-4mlfc") pod "14544b4d-bde9-4481-abad-20b1d1c14d72" (UID: "14544b4d-bde9-4481-abad-20b1d1c14d72"). InnerVolumeSpecName "kube-api-access-4mlfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.183735 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14544b4d-bde9-4481-abad-20b1d1c14d72" (UID: "14544b4d-bde9-4481-abad-20b1d1c14d72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.195531 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80dcfad1-67ed-4289-93e7-e5fcbfd3682d" (UID: "80dcfad1-67ed-4289-93e7-e5fcbfd3682d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.217898 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.218021 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.218038 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.218054 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14544b4d-bde9-4481-abad-20b1d1c14d72-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.218068 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mlfc\" (UniqueName: \"kubernetes.io/projected/14544b4d-bde9-4481-abad-20b1d1c14d72-kube-api-access-4mlfc\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.218081 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz86s\" (UniqueName: \"kubernetes.io/projected/80dcfad1-67ed-4289-93e7-e5fcbfd3682d-kube-api-access-nz86s\") on node \"crc\" DevicePath \"\""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.244528 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5c98g" event={"ID":"14544b4d-bde9-4481-abad-20b1d1c14d72","Type":"ContainerDied","Data":"c34d2b87aacc50ccd5b54e2222653aa9292e9a43835a7fa8204f995ac2044c30"}
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.244577 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5c98g"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.244668 5030 scope.go:117] "RemoveContainer" containerID="02aaf8badc85ab7863c8bd19e95176d9db6e2cc9c8599937fb555edc3ab97552"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.247769 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmlxr" event={"ID":"80dcfad1-67ed-4289-93e7-e5fcbfd3682d","Type":"ContainerDied","Data":"84bee1b2f2b1b660cadbfda53e0fe4c1932b35a4e57a0b7b76fcb36f6d83b2c3"}
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.247869 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmlxr"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.249915 5030 generic.go:334] "Generic (PLEG): container finished" podID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerID="fba421790058ede569f8e245d469ee36d8cb3fd2942467b7632893bec5bbe028" exitCode=0
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.249966 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4rms" event={"ID":"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79","Type":"ContainerDied","Data":"fba421790058ede569f8e245d469ee36d8cb3fd2942467b7632893bec5bbe028"}
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.263999 5030 scope.go:117] "RemoveContainer" containerID="a39a9df6f581fde6ec9e9b65ce1d7175751a93a735895986be7d9bc499a3cfce"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.281610 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5c98g"]
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.284869 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5c98g"]
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.293009 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmlxr"]
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.295814 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fmlxr"]
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.311137 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4rms"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.313299 5030 scope.go:117] "RemoveContainer" containerID="71abfae994803b80ca932a4b7a9b7cb229c45ee5e93ec0956a8f3340520ed085"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.319376 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4x6m\" (UniqueName: \"kubernetes.io/projected/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-kube-api-access-q4x6m\") pod \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.319599 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-utilities\") pod \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.319668 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-catalog-content\") pod \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\" (UID: \"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79\") "
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.321023 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-utilities" (OuterVolumeSpecName: "utilities") pod "b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" (UID: "b339c3d5-ab2d-4b8f-958c-14a90aa2bd79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.323493 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-kube-api-access-q4x6m" (OuterVolumeSpecName: "kube-api-access-q4x6m") pod "b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" (UID: "b339c3d5-ab2d-4b8f-958c-14a90aa2bd79"). InnerVolumeSpecName "kube-api-access-q4x6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.330273 5030 scope.go:117] "RemoveContainer" containerID="36563cbd9708a46a2104d972697d3d98056cf4483120a9e2f18201d54b8c61ea"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.354770 5030 scope.go:117] "RemoveContainer" containerID="0573b3561a6e7e029a9594f88e9b57154ef1a4a6d8b099ff9bf0726e27eb22ba"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.370663 5030 scope.go:117] "RemoveContainer" containerID="c625a62e34ca8cd8494b9134d9dd4a849ad187a433e3718b138f76db4f5f43be"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.399448 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" path="/var/lib/kubelet/pods/14544b4d-bde9-4481-abad-20b1d1c14d72/volumes"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.400163 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" path="/var/lib/kubelet/pods/80dcfad1-67ed-4289-93e7-e5fcbfd3682d/volumes"
Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.417971 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" (UID: "b339c3d5-ab2d-4b8f-958c-14a90aa2bd79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.420944 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.420965 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:56:56 crc kubenswrapper[5030]: I1128 11:56:56.421597 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4x6m\" (UniqueName: \"kubernetes.io/projected/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79-kube-api-access-q4x6m\") on node \"crc\" DevicePath \"\"" Nov 28 11:56:57 crc kubenswrapper[5030]: I1128 11:56:57.259887 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4rms" event={"ID":"b339c3d5-ab2d-4b8f-958c-14a90aa2bd79","Type":"ContainerDied","Data":"19e9fb2a4c1b696c65144010315e2db9a23bf34d220dcb98a265c58abd1a0c7c"} Nov 28 11:56:57 crc kubenswrapper[5030]: I1128 11:56:57.261052 5030 scope.go:117] "RemoveContainer" containerID="fba421790058ede569f8e245d469ee36d8cb3fd2942467b7632893bec5bbe028" Nov 28 11:56:57 crc kubenswrapper[5030]: I1128 11:56:57.261361 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n4rms" Nov 28 11:56:57 crc kubenswrapper[5030]: I1128 11:56:57.286831 5030 scope.go:117] "RemoveContainer" containerID="519d63679d064bf1c9779e9b7b012582e95389bbf5c840686f543fe6d8463991" Nov 28 11:56:57 crc kubenswrapper[5030]: I1128 11:56:57.317630 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4rms"] Nov 28 11:56:57 crc kubenswrapper[5030]: I1128 11:56:57.322553 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n4rms"] Nov 28 11:56:57 crc kubenswrapper[5030]: I1128 11:56:57.325653 5030 scope.go:117] "RemoveContainer" containerID="cf1fd3ad49b0181c41184e575c662bc913238193e5bee6f1f11431e4e09683cc" Nov 28 11:56:58 crc kubenswrapper[5030]: I1128 11:56:58.405435 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" path="/var/lib/kubelet/pods/b339c3d5-ab2d-4b8f-958c-14a90aa2bd79/volumes" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.553823 5030 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.554814 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.554856 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.554891 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.554909 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" 
containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.554929 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed8ccfd-087d-4857-be87-9394c446a411" containerName="pruner" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.554948 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed8ccfd-087d-4857-be87-9394c446a411" containerName="pruner" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.554975 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.554991 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555013 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555027 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555051 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555063 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555078 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555090 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" 
containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555106 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555118 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="extract-utilities" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555136 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555149 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555166 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555178 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555194 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555205 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555227 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555239 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" 
containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.555258 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555270 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerName="extract-content" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555523 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b339c3d5-ab2d-4b8f-958c-14a90aa2bd79" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555557 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="948d21f6-477d-4ea8-bc55-e4e061ae2284" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555572 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dcfad1-67ed-4289-93e7-e5fcbfd3682d" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555594 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="14544b4d-bde9-4481-abad-20b1d1c14d72" containerName="registry-server" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.555612 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed8ccfd-087d-4857-be87-9394c446a411" containerName="pruner" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.556227 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.558634 5030 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.559176 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0" gracePeriod=15 Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.559215 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b" gracePeriod=15 Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.559269 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738" gracePeriod=15 Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.559325 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa" gracePeriod=15 Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.559409 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83" gracePeriod=15 Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561317 5030 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.561550 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561572 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.561587 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561598 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.561618 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561628 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.561638 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561646 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:57:00 crc 
kubenswrapper[5030]: E1128 11:57:00.561657 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561665 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.561678 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561686 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 11:57:00 crc kubenswrapper[5030]: E1128 11:57:00.561709 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561726 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561871 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561885 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561895 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561908 5030 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.561924 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.562141 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.685912 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.686020 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.686083 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.686128 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.686168 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.686195 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.686291 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.686337 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787433 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787503 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787559 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787581 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787598 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787578 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787653 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787617 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787681 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787695 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787670 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787701 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787730 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787733 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787670 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:00 crc kubenswrapper[5030]: I1128 11:57:00.787713 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.307307 5030 generic.go:334] "Generic (PLEG): container finished" podID="646b709f-223b-4619-aff7-a5e8bcb29d88" containerID="ea032c0b64d0a25f6ed11a740d8254b04d7153d247850e2aa5c739edcbca2ea4" exitCode=0 Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.307463 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"646b709f-223b-4619-aff7-a5e8bcb29d88","Type":"ContainerDied","Data":"ea032c0b64d0a25f6ed11a740d8254b04d7153d247850e2aa5c739edcbca2ea4"} Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.309208 5030 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.309763 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.312831 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.315215 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.316622 5030 generic.go:334] 
"Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b" exitCode=0 Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.316665 5030 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa" exitCode=0 Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.316684 5030 scope.go:117] "RemoveContainer" containerID="8b7e1acb58bbfcff689bfcc7dc8e855cdd91827c02991306689c4fae058cf19b" Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.316689 5030 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738" exitCode=0 Nov 28 11:57:01 crc kubenswrapper[5030]: I1128 11:57:01.316805 5030 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83" exitCode=2 Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.327620 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.396427 5030 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.397426 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.715192 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.716174 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.818623 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-var-lock\") pod \"646b709f-223b-4619-aff7-a5e8bcb29d88\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.818786 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-kubelet-dir\") pod \"646b709f-223b-4619-aff7-a5e8bcb29d88\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.818783 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-var-lock" (OuterVolumeSpecName: "var-lock") pod "646b709f-223b-4619-aff7-a5e8bcb29d88" (UID: "646b709f-223b-4619-aff7-a5e8bcb29d88"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.818859 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/646b709f-223b-4619-aff7-a5e8bcb29d88-kube-api-access\") pod \"646b709f-223b-4619-aff7-a5e8bcb29d88\" (UID: \"646b709f-223b-4619-aff7-a5e8bcb29d88\") " Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.818907 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "646b709f-223b-4619-aff7-a5e8bcb29d88" (UID: "646b709f-223b-4619-aff7-a5e8bcb29d88"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.819142 5030 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.819158 5030 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/646b709f-223b-4619-aff7-a5e8bcb29d88-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.827889 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646b709f-223b-4619-aff7-a5e8bcb29d88-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "646b709f-223b-4619-aff7-a5e8bcb29d88" (UID: "646b709f-223b-4619-aff7-a5e8bcb29d88"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.920822 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/646b709f-223b-4619-aff7-a5e8bcb29d88-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.987955 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.989325 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.990227 5030 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:02 crc kubenswrapper[5030]: I1128 11:57:02.990987 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.123323 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.123457 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.123615 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.123644 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.123666 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.123755 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.124510 5030 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.124569 5030 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.124589 5030 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.342366 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.343498 5030 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0" exitCode=0 Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.343595 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.343600 5030 scope.go:117] "RemoveContainer" containerID="dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.345955 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"646b709f-223b-4619-aff7-a5e8bcb29d88","Type":"ContainerDied","Data":"d1147c272419b75ac6417652132dd0e02275ccd2e7de6dcaa335a98df794deb6"} Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.346010 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1147c272419b75ac6417652132dd0e02275ccd2e7de6dcaa335a98df794deb6" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.346038 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.365906 5030 scope.go:117] "RemoveContainer" containerID="a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.367433 5030 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.368366 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.375348 5030 
status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.375713 5030 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.390262 5030 scope.go:117] "RemoveContainer" containerID="82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.414432 5030 scope.go:117] "RemoveContainer" containerID="9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.435307 5030 scope.go:117] "RemoveContainer" containerID="dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.454894 5030 scope.go:117] "RemoveContainer" containerID="2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.484345 5030 scope.go:117] "RemoveContainer" containerID="dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b" Nov 28 11:57:03 crc kubenswrapper[5030]: E1128 11:57:03.484807 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\": container with ID starting with dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b not found: ID does not exist" 
containerID="dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.484857 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b"} err="failed to get container status \"dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\": rpc error: code = NotFound desc = could not find container \"dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b\": container with ID starting with dba616c97d4bb2ca22a1260a669e516da121f0cfdc5ea6f384933d790345af8b not found: ID does not exist" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.484887 5030 scope.go:117] "RemoveContainer" containerID="a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa" Nov 28 11:57:03 crc kubenswrapper[5030]: E1128 11:57:03.485363 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\": container with ID starting with a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa not found: ID does not exist" containerID="a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.485426 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa"} err="failed to get container status \"a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\": rpc error: code = NotFound desc = could not find container \"a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa\": container with ID starting with a67e6b967b21772fbfe034d789eac9cf118a70d8e4d0d5726815d79353e274fa not found: ID does not exist" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.485504 5030 scope.go:117] 
"RemoveContainer" containerID="82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738" Nov 28 11:57:03 crc kubenswrapper[5030]: E1128 11:57:03.486382 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\": container with ID starting with 82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738 not found: ID does not exist" containerID="82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.486420 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738"} err="failed to get container status \"82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\": rpc error: code = NotFound desc = could not find container \"82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738\": container with ID starting with 82ece5c54387f69c55a6ef32d73cb4126c6bf47034079e9ce50c376ce5089738 not found: ID does not exist" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.486440 5030 scope.go:117] "RemoveContainer" containerID="9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83" Nov 28 11:57:03 crc kubenswrapper[5030]: E1128 11:57:03.486989 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\": container with ID starting with 9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83 not found: ID does not exist" containerID="9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.487033 5030 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83"} err="failed to get container status \"9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\": rpc error: code = NotFound desc = could not find container \"9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83\": container with ID starting with 9e3239e9618667204313a418a4bfb68f6a29ef0d1e724f1b67835e2b300ded83 not found: ID does not exist" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.487065 5030 scope.go:117] "RemoveContainer" containerID="dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0" Nov 28 11:57:03 crc kubenswrapper[5030]: E1128 11:57:03.487513 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\": container with ID starting with dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0 not found: ID does not exist" containerID="dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.487549 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0"} err="failed to get container status \"dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\": rpc error: code = NotFound desc = could not find container \"dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0\": container with ID starting with dc0857f52f4e3079e2318997489685e14526a02ca71d389cc48b2a30803025e0 not found: ID does not exist" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.487574 5030 scope.go:117] "RemoveContainer" containerID="2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681" Nov 28 11:57:03 crc kubenswrapper[5030]: E1128 11:57:03.488261 5030 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\": container with ID starting with 2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681 not found: ID does not exist" containerID="2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681" Nov 28 11:57:03 crc kubenswrapper[5030]: I1128 11:57:03.488289 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681"} err="failed to get container status \"2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\": rpc error: code = NotFound desc = could not find container \"2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681\": container with ID starting with 2f1b63b42859704ae5d4574e217e70292a57122bd50e993e0210d7e34455a681 not found: ID does not exist" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.165374 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" podUID="cd9592cc-918c-4863-a561-61372a85c43f" containerName="oauth-openshift" containerID="cri-o://33e13650c65a78fbd483f0bafccc3430deceabc79da53c0ded25ef6126e1ee79" gracePeriod=15 Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.356072 5030 generic.go:334] "Generic (PLEG): container finished" podID="cd9592cc-918c-4863-a561-61372a85c43f" containerID="33e13650c65a78fbd483f0bafccc3430deceabc79da53c0ded25ef6126e1ee79" exitCode=0 Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.356171 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" event={"ID":"cd9592cc-918c-4863-a561-61372a85c43f","Type":"ContainerDied","Data":"33e13650c65a78fbd483f0bafccc3430deceabc79da53c0ded25ef6126e1ee79"} Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.406790 5030 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.590697 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.592142 5030 status_manager.go:851] "Failed to get status for pod" podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.592900 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.744743 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdbvg\" (UniqueName: \"kubernetes.io/projected/cd9592cc-918c-4863-a561-61372a85c43f-kube-api-access-qdbvg\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.744822 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-audit-policies\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.744962 5030 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-serving-cert\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745679 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-router-certs\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745775 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-session\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745831 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745848 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-login\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745884 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-service-ca\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745927 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-error\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745965 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-idp-0-file-data\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.745999 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-cliconfig\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc 
kubenswrapper[5030]: I1128 11:57:04.746038 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-ocp-branding-template\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.746071 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd9592cc-918c-4863-a561-61372a85c43f-audit-dir\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.746105 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-trusted-ca-bundle\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.746147 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-provider-selection\") pod \"cd9592cc-918c-4863-a561-61372a85c43f\" (UID: \"cd9592cc-918c-4863-a561-61372a85c43f\") " Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.746730 5030 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.747884 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/cd9592cc-918c-4863-a561-61372a85c43f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.748075 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.748744 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.749323 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.752841 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.753774 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9592cc-918c-4863-a561-61372a85c43f-kube-api-access-qdbvg" (OuterVolumeSpecName: "kube-api-access-qdbvg") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "kube-api-access-qdbvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.753880 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.754403 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.755055 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.755942 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.755967 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.757523 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.766493 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "cd9592cc-918c-4863-a561-61372a85c43f" (UID: "cd9592cc-918c-4863-a561-61372a85c43f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848747 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848806 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848827 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848850 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848871 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848890 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848909 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848935 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848962 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.848988 5030 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd9592cc-918c-4863-a561-61372a85c43f-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.849010 5030 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.849030 5030 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd9592cc-918c-4863-a561-61372a85c43f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:04 crc kubenswrapper[5030]: I1128 11:57:04.849050 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdbvg\" (UniqueName: \"kubernetes.io/projected/cd9592cc-918c-4863-a561-61372a85c43f-kube-api-access-qdbvg\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.366919 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" event={"ID":"cd9592cc-918c-4863-a561-61372a85c43f","Type":"ContainerDied","Data":"592acbe817814a2c914b5c5f3a312b283d728edc05e9ef312d63e7e53ba2d0b0"} Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.367045 5030 scope.go:117] "RemoveContainer" containerID="33e13650c65a78fbd483f0bafccc3430deceabc79da53c0ded25ef6126e1ee79" Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.368642 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.372034 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.372910 5030 status_manager.go:851] "Failed to get status for pod" podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.400200 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.401047 5030 status_manager.go:851] "Failed to get status for pod" podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:05 crc kubenswrapper[5030]: E1128 11:57:05.622440 5030 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.110:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:05 crc kubenswrapper[5030]: I1128 11:57:05.623146 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:05 crc kubenswrapper[5030]: W1128 11:57:05.665657 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c67497f0771e32fca54d918b9d5df4a5aec5d441023a45b38e5310b74d78dda7 WatchSource:0}: Error finding container c67497f0771e32fca54d918b9d5df4a5aec5d441023a45b38e5310b74d78dda7: Status 404 returned error can't find the container with id c67497f0771e32fca54d918b9d5df4a5aec5d441023a45b38e5310b74d78dda7 Nov 28 11:57:05 crc kubenswrapper[5030]: E1128 11:57:05.672323 5030 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c29b853571a95 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:57:05.671330453 +0000 UTC m=+243.613073166,LastTimestamp:2025-11-28 11:57:05.671330453 +0000 UTC m=+243.613073166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:57:06 crc kubenswrapper[5030]: I1128 11:57:06.378336 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"81da27f98caed4d24b160271874b44bef1b278779b594faf2ab1234cde946b93"} Nov 28 11:57:06 crc kubenswrapper[5030]: I1128 11:57:06.378440 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c67497f0771e32fca54d918b9d5df4a5aec5d441023a45b38e5310b74d78dda7"} Nov 28 11:57:06 crc kubenswrapper[5030]: E1128 11:57:06.379360 5030 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:06 crc kubenswrapper[5030]: I1128 11:57:06.379728 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:06 crc kubenswrapper[5030]: I1128 11:57:06.380372 5030 status_manager.go:851] "Failed to get status for pod" podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.048258 5030 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.049557 5030 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.050303 5030 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.050880 5030 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.051350 5030 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:08 crc kubenswrapper[5030]: I1128 11:57:08.051456 5030 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.051972 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.252962 5030 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Nov 28 11:57:08 crc kubenswrapper[5030]: E1128 11:57:08.654427 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Nov 28 11:57:09 crc kubenswrapper[5030]: E1128 11:57:09.456144 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s" Nov 28 11:57:11 crc kubenswrapper[5030]: E1128 11:57:11.056817 5030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s" Nov 28 11:57:12 crc kubenswrapper[5030]: E1128 11:57:12.191902 5030 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c29b853571a95 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 11:57:05.671330453 +0000 UTC m=+243.613073166,LastTimestamp:2025-11-28 11:57:05.671330453 +0000 UTC m=+243.613073166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.392755 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.401532 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.402254 5030 status_manager.go:851] "Failed to get status for pod" podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.403217 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.403779 5030 status_manager.go:851] "Failed to get status for pod" 
podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.422646 5030 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.422689 5030 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:12 crc kubenswrapper[5030]: E1128 11:57:12.423167 5030 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:12 crc kubenswrapper[5030]: I1128 11:57:12.423937 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.433592 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.433691 5030 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570" exitCode=1 Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.433811 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570"} Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.434649 5030 scope.go:117] "RemoveContainer" containerID="21c77c6422d4e9a3e735a8542d47aa64c67d375cdcfa7664498118d10a240570" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.434938 5030 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.435074 5030 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ac46be6e35de0291673c67c009a84f09b252414b57bb8493de4c54f1d54490da" exitCode=0 Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.435116 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ac46be6e35de0291673c67c009a84f09b252414b57bb8493de4c54f1d54490da"} Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.435158 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9001c0c13f9ccf7b1adae65b18994fe149b293ec43a45794ec298c1be5edb0a6"} Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.435398 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.435759 5030 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.435788 5030 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.435976 5030 status_manager.go:851] "Failed to get status for pod" podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:13 crc kubenswrapper[5030]: E1128 11:57:13.436656 5030 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.436651 5030 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.437235 5030 status_manager.go:851] "Failed to get status for pod" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:13 crc kubenswrapper[5030]: I1128 11:57:13.437705 5030 status_manager.go:851] "Failed to get status for pod" podUID="cd9592cc-918c-4863-a561-61372a85c43f" pod="openshift-authentication/oauth-openshift-558db77b4-456s8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-456s8\": dial tcp 38.102.83.110:6443: connect: connection refused" Nov 28 11:57:14 crc kubenswrapper[5030]: I1128 11:57:14.497508 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 11:57:14 crc kubenswrapper[5030]: I1128 11:57:14.497989 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f99f300593c7fbdb3637d02544da403f5dad41f1b1a6d6428150558e0199b1c"} Nov 28 11:57:14 crc kubenswrapper[5030]: I1128 11:57:14.504050 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc2c3c8f0e0856cb48bf08d8edb6a66faff5223c2de87f7bc5ee76e294b94009"} Nov 28 11:57:14 crc kubenswrapper[5030]: I1128 11:57:14.504081 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b98504f4e78ab98847289a211adbea42115274530d0b1178e93fed1e88681bf3"} Nov 28 11:57:14 crc kubenswrapper[5030]: I1128 11:57:14.504092 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f8673638563aeec3b9c88f2ef8035c88c07dd4453c3aa35192e72a77588e452b"} Nov 28 11:57:15 crc kubenswrapper[5030]: I1128 11:57:15.513356 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2225d44762992ce576de17cb760a50bc0b2235485f30af5fce2956035e7fd18f"} Nov 28 11:57:15 crc kubenswrapper[5030]: I1128 11:57:15.513402 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f6f86463494f2a4063ecb5378d821d516e2fa974bf68a83d8bc022bc1fda90b2"} Nov 28 11:57:15 crc kubenswrapper[5030]: I1128 11:57:15.513667 5030 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:15 crc kubenswrapper[5030]: I1128 11:57:15.513681 5030 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:15 crc kubenswrapper[5030]: I1128 11:57:15.513999 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:17 crc kubenswrapper[5030]: I1128 11:57:17.424318 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:17 crc kubenswrapper[5030]: I1128 11:57:17.425791 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:17 crc kubenswrapper[5030]: I1128 11:57:17.431298 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:17 crc kubenswrapper[5030]: I1128 11:57:17.743762 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:57:20 crc kubenswrapper[5030]: I1128 11:57:20.319859 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:57:20 crc kubenswrapper[5030]: I1128 11:57:20.320237 5030 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 11:57:20 crc kubenswrapper[5030]: I1128 11:57:20.320321 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 11:57:20 crc kubenswrapper[5030]: I1128 11:57:20.569908 5030 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:21 crc 
kubenswrapper[5030]: I1128 11:57:21.561584 5030 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:21 crc kubenswrapper[5030]: I1128 11:57:21.561647 5030 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:21 crc kubenswrapper[5030]: I1128 11:57:21.571331 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:22 crc kubenswrapper[5030]: I1128 11:57:22.404212 5030 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bc2a39fc-5eab-413f-8a1f-ad701b7cb033" Nov 28 11:57:22 crc kubenswrapper[5030]: I1128 11:57:22.567376 5030 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:22 crc kubenswrapper[5030]: I1128 11:57:22.567421 5030 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7a36cb8a-5a38-4da0-938c-fafe93f48886" Nov 28 11:57:22 crc kubenswrapper[5030]: I1128 11:57:22.571925 5030 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="bc2a39fc-5eab-413f-8a1f-ad701b7cb033" Nov 28 11:57:30 crc kubenswrapper[5030]: I1128 11:57:30.320714 5030 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 28 11:57:30 crc kubenswrapper[5030]: I1128 
11:57:30.321699 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 28 11:57:30 crc kubenswrapper[5030]: I1128 11:57:30.991382 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.005425 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.270655 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.282006 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.728424 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.780100 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.866747 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.974432 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 11:57:32 crc kubenswrapper[5030]: I1128 11:57:32.986372 5030 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.123090 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.150097 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.420761 5030 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.428157 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-456s8"] Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.428251 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.434247 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.435415 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.458701 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.458675633 podStartE2EDuration="13.458675633s" podCreationTimestamp="2025-11-28 11:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:57:33.454936972 +0000 UTC m=+271.396679695" watchObservedRunningTime="2025-11-28 11:57:33.458675633 +0000 UTC m=+271.400418346" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 
11:57:33.627541 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.737294 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.913021 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 11:57:33 crc kubenswrapper[5030]: I1128 11:57:33.920179 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.048354 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.109279 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.411758 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9592cc-918c-4863-a561-61372a85c43f" path="/var/lib/kubelet/pods/cd9592cc-918c-4863-a561-61372a85c43f/volumes" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.470703 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.480145 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.510034 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.577432 5030 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.606695 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.736676 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.854511 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.956679 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 11:57:34 crc kubenswrapper[5030]: I1128 11:57:34.980027 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.013398 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.036915 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.073200 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.080957 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.121087 5030 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.164765 5030 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"audit-1" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.316079 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.398876 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.417430 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.471902 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.479993 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.640411 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.657540 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.668099 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.702171 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.806524 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.810928 5030 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 11:57:35 crc kubenswrapper[5030]: I1128 11:57:35.989445 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.037418 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.112365 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.128346 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.132342 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.215317 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.364403 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.384777 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.426940 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.449118 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" 
Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.487398 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.561533 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.676395 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.696841 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.720574 5030 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.844492 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 11:57:36 crc kubenswrapper[5030]: I1128 11:57:36.957007 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.049706 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.058879 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.064042 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.360617 5030 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.460444 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.550611 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.633664 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.721160 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.761598 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.764308 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.780993 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.800153 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.806355 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.807545 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 11:57:37 crc 
kubenswrapper[5030]: I1128 11:57:37.860005 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.870311 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.900836 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 11:57:37 crc kubenswrapper[5030]: I1128 11:57:37.917346 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.031994 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.044362 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.187648 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.195235 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.291573 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.298991 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.303265 5030 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.336576 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.350276 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.356559 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.398986 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.401961 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.415510 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.539174 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.559279 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.562088 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.594500 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.631002 5030 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.719243 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.811090 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.825823 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.880460 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.910997 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 11:57:38 crc kubenswrapper[5030]: I1128 11:57:38.976715 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.030924 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.078781 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.192571 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.210988 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 11:57:39 crc 
kubenswrapper[5030]: I1128 11:57:39.242293 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.311830 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.347806 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.421623 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.453461 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.458620 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.577409 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.638100 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.674996 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 11:57:39 crc kubenswrapper[5030]: I1128 11:57:39.754714 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.008682 5030 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.009855 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.032423 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.101261 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.113576 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.123869 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.174359 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.213942 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.217582 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.234836 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.258638 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.277711 5030 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.326981 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.334828 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.496454 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.506360 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.596319 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.647625 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.746548 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.754861 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.808102 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.866499 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 11:57:40 crc 
kubenswrapper[5030]: I1128 11:57:40.947119 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 11:57:40 crc kubenswrapper[5030]: I1128 11:57:40.960241 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.048255 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.053968 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.108411 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.123092 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.131322 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.158411 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.172079 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.227111 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 
11:57:41.235973 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d4df5b879-xjncl"] Nov 28 11:57:41 crc kubenswrapper[5030]: E1128 11:57:41.236211 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" containerName="installer" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.236231 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" containerName="installer" Nov 28 11:57:41 crc kubenswrapper[5030]: E1128 11:57:41.236242 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9592cc-918c-4863-a561-61372a85c43f" containerName="oauth-openshift" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.236252 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9592cc-918c-4863-a561-61372a85c43f" containerName="oauth-openshift" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.236374 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9592cc-918c-4863-a561-61372a85c43f" containerName="oauth-openshift" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.236489 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="646b709f-223b-4619-aff7-a5e8bcb29d88" containerName="installer" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.237029 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.244809 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.245237 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.245874 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.246341 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.246568 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.246901 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.247749 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.247867 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.248099 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.248735 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 11:57:41 
crc kubenswrapper[5030]: I1128 11:57:41.248853 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.249561 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.257711 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.258242 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4df5b879-xjncl"] Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.259956 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.279740 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.285738 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305612 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305672 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-audit-policies\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305722 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-audit-dir\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305790 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305817 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99p9\" (UniqueName: \"kubernetes.io/projected/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-kube-api-access-j99p9\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305877 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " 
pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305904 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.305969 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.306004 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-session\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.306058 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.306085 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.306126 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.306150 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.306176 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.402725 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 11:57:41 crc 
kubenswrapper[5030]: I1128 11:57:41.407508 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-audit-dir\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407569 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407594 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99p9\" (UniqueName: \"kubernetes.io/projected/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-kube-api-access-j99p9\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407620 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407652 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407688 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407731 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-session\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407756 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407783 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " 
pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407807 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407828 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407852 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407875 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.407894 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-audit-policies\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.408725 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-audit-policies\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.408783 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-audit-dir\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.415160 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.415762 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 
11:57:41.416821 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.416943 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.417334 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.417695 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.418323 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.420739 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.421050 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.423819 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.424203 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-v4-0-config-system-session\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.435220 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99p9\" (UniqueName: \"kubernetes.io/projected/e9dcc150-f158-4eee-89b5-f57e7cd5bf47-kube-api-access-j99p9\") pod \"oauth-openshift-5d4df5b879-xjncl\" (UID: \"e9dcc150-f158-4eee-89b5-f57e7cd5bf47\") " pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.582532 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.640762 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.775106 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.779717 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.913599 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.939813 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.951195 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.972747 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 11:57:41 crc kubenswrapper[5030]: I1128 11:57:41.993854 5030 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.018868 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.121665 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.216393 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.376786 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.387990 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.423508 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.561498 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.572069 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.615041 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.645164 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.649741 5030 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.723948 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.724446 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.797429 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.806178 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.850350 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.879942 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.930795 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.977860 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 11:57:42 crc kubenswrapper[5030]: I1128 11:57:42.981940 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.002603 5030 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.237870 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.256059 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.295366 5030 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.295727 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://81da27f98caed4d24b160271874b44bef1b278779b594faf2ab1234cde946b93" gracePeriod=5 Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.348624 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.354669 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.387074 5030 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.412567 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.422162 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.554690 5030 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.613457 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.614750 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.802610 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.808518 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.835992 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 11:57:43 crc kubenswrapper[5030]: I1128 11:57:43.842959 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.001155 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.011314 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.024406 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.048920 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 11:57:44 crc 
kubenswrapper[5030]: I1128 11:57:44.069209 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.069579 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.072289 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.162419 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.228818 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4df5b879-xjncl"] Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.231255 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.266402 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.291651 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.358287 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.388822 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.420567 5030 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.475587 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.489320 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.504210 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.586082 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.587422 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.636916 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.758010 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" event={"ID":"e9dcc150-f158-4eee-89b5-f57e7cd5bf47","Type":"ContainerStarted","Data":"990681a38ef5b914a6de01429494b4c01f952c06f29920b861016f374b111c83"} Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.758062 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" event={"ID":"e9dcc150-f158-4eee-89b5-f57e7cd5bf47","Type":"ContainerStarted","Data":"7dd86676968005eac4fc21f3a822cf8826705fdbdce686858bbdde68d1fe4bbf"} Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.759645 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.793833 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" podStartSLOduration=65.793807897 podStartE2EDuration="1m5.793807897s" podCreationTimestamp="2025-11-28 11:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:57:44.786019765 +0000 UTC m=+282.727762448" watchObservedRunningTime="2025-11-28 11:57:44.793807897 +0000 UTC m=+282.735550620" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.909504 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 11:57:44 crc kubenswrapper[5030]: I1128 11:57:44.936034 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.095068 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d4df5b879-xjncl" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.260061 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.321630 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.335492 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.397139 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 
11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.565156 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.620605 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.632039 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.836300 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 11:57:45 crc kubenswrapper[5030]: I1128 11:57:45.980831 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 11:57:46 crc kubenswrapper[5030]: I1128 11:57:46.125297 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 11:57:46 crc kubenswrapper[5030]: I1128 11:57:46.177997 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 11:57:46 crc kubenswrapper[5030]: I1128 11:57:46.812169 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 11:57:46 crc kubenswrapper[5030]: I1128 11:57:46.949173 5030 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 11:57:47 crc kubenswrapper[5030]: I1128 11:57:47.010261 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 11:57:47 crc kubenswrapper[5030]: 
I1128 11:57:47.153720 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 11:57:47 crc kubenswrapper[5030]: I1128 11:57:47.214761 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 11:57:47 crc kubenswrapper[5030]: I1128 11:57:47.643326 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.066023 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.156050 5030 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.525961 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.728348 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.785034 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.785104 5030 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="81da27f98caed4d24b160271874b44bef1b278779b594faf2ab1234cde946b93" exitCode=137 Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.897182 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 
11:57:48.897639 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.921887 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.921980 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.922094 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.922176 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.922266 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.922281 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.922382 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.922768 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.922691 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.923225 5030 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.923267 5030 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.923291 5030 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.923317 5030 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.936049 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 11:57:48 crc kubenswrapper[5030]: I1128 11:57:48.959776 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 11:57:49 crc kubenswrapper[5030]: I1128 11:57:49.026273 5030 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 11:57:49 crc kubenswrapper[5030]: I1128 11:57:49.793858 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 11:57:49 crc kubenswrapper[5030]: I1128 11:57:49.793971 5030 scope.go:117] "RemoveContainer" containerID="81da27f98caed4d24b160271874b44bef1b278779b594faf2ab1234cde946b93" Nov 28 11:57:49 crc kubenswrapper[5030]: I1128 11:57:49.794029 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 11:57:50 crc kubenswrapper[5030]: I1128 11:57:50.405836 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.395076 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qv6pd"] Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.396045 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" podUID="49559462-c755-4be6-8277-c8cc20aeb0e0" containerName="controller-manager" containerID="cri-o://df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b" gracePeriod=30 Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.522612 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8"] Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.522859 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" podUID="4310c2c4-4ad9-4820-abc9-09f761fa3a71" containerName="route-controller-manager" containerID="cri-o://3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c" gracePeriod=30 Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.810021 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.877847 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49559462-c755-4be6-8277-c8cc20aeb0e0-serving-cert\") pod \"49559462-c755-4be6-8277-c8cc20aeb0e0\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.877929 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-config\") pod \"49559462-c755-4be6-8277-c8cc20aeb0e0\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.877982 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-proxy-ca-bundles\") pod \"49559462-c755-4be6-8277-c8cc20aeb0e0\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.878010 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-client-ca\") pod \"49559462-c755-4be6-8277-c8cc20aeb0e0\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.878096 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66gnk\" (UniqueName: \"kubernetes.io/projected/49559462-c755-4be6-8277-c8cc20aeb0e0-kube-api-access-66gnk\") pod \"49559462-c755-4be6-8277-c8cc20aeb0e0\" (UID: \"49559462-c755-4be6-8277-c8cc20aeb0e0\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.879983 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-client-ca" (OuterVolumeSpecName: "client-ca") pod "49559462-c755-4be6-8277-c8cc20aeb0e0" (UID: "49559462-c755-4be6-8277-c8cc20aeb0e0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.880029 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-config" (OuterVolumeSpecName: "config") pod "49559462-c755-4be6-8277-c8cc20aeb0e0" (UID: "49559462-c755-4be6-8277-c8cc20aeb0e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.880658 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "49559462-c755-4be6-8277-c8cc20aeb0e0" (UID: "49559462-c755-4be6-8277-c8cc20aeb0e0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.884927 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49559462-c755-4be6-8277-c8cc20aeb0e0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49559462-c755-4be6-8277-c8cc20aeb0e0" (UID: "49559462-c755-4be6-8277-c8cc20aeb0e0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.885856 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49559462-c755-4be6-8277-c8cc20aeb0e0-kube-api-access-66gnk" (OuterVolumeSpecName: "kube-api-access-66gnk") pod "49559462-c755-4be6-8277-c8cc20aeb0e0" (UID: "49559462-c755-4be6-8277-c8cc20aeb0e0"). InnerVolumeSpecName "kube-api-access-66gnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.903744 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.943875 5030 generic.go:334] "Generic (PLEG): container finished" podID="49559462-c755-4be6-8277-c8cc20aeb0e0" containerID="df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b" exitCode=0 Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.943935 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.943951 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" event={"ID":"49559462-c755-4be6-8277-c8cc20aeb0e0","Type":"ContainerDied","Data":"df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b"} Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.943986 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qv6pd" event={"ID":"49559462-c755-4be6-8277-c8cc20aeb0e0","Type":"ContainerDied","Data":"0d16d752196f4427bca0f48b922ffac81a5f58c9f76fb480331a0d9c4a63ea48"} Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.944007 5030 scope.go:117] "RemoveContainer" containerID="df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.946200 5030 generic.go:334] "Generic (PLEG): container finished" podID="4310c2c4-4ad9-4820-abc9-09f761fa3a71" containerID="3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c" exitCode=0 Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.946230 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" event={"ID":"4310c2c4-4ad9-4820-abc9-09f761fa3a71","Type":"ContainerDied","Data":"3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c"} Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.946250 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" event={"ID":"4310c2c4-4ad9-4820-abc9-09f761fa3a71","Type":"ContainerDied","Data":"1f4a7abeb5be2a94b3e6ef13dddd7bebc9225aeb2256eb4da034d6f48dd4502a"} Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.946290 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.961422 5030 scope.go:117] "RemoveContainer" containerID="df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b" Nov 28 11:58:09 crc kubenswrapper[5030]: E1128 11:58:09.961874 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b\": container with ID starting with df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b not found: ID does not exist" containerID="df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.961941 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b"} err="failed to get container status \"df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b\": rpc error: code = NotFound desc = could not find container \"df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b\": container with ID starting with 
df1c3bf5c0da889f964e3956ba7bc9825aede060c68b9dcfec47b626f214729b not found: ID does not exist" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.961964 5030 scope.go:117] "RemoveContainer" containerID="3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.979878 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4310c2c4-4ad9-4820-abc9-09f761fa3a71-serving-cert\") pod \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.979939 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrq25\" (UniqueName: \"kubernetes.io/projected/4310c2c4-4ad9-4820-abc9-09f761fa3a71-kube-api-access-xrq25\") pod \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.980023 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-client-ca\") pod \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.980075 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-config\") pod \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\" (UID: \"4310c2c4-4ad9-4820-abc9-09f761fa3a71\") " Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.980410 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 
11:58:09.980423 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.980431 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66gnk\" (UniqueName: \"kubernetes.io/projected/49559462-c755-4be6-8277-c8cc20aeb0e0-kube-api-access-66gnk\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.980459 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49559462-c755-4be6-8277-c8cc20aeb0e0-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.980504 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49559462-c755-4be6-8277-c8cc20aeb0e0-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.981626 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-config" (OuterVolumeSpecName: "config") pod "4310c2c4-4ad9-4820-abc9-09f761fa3a71" (UID: "4310c2c4-4ad9-4820-abc9-09f761fa3a71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.981660 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-client-ca" (OuterVolumeSpecName: "client-ca") pod "4310c2c4-4ad9-4820-abc9-09f761fa3a71" (UID: "4310c2c4-4ad9-4820-abc9-09f761fa3a71"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.983816 5030 scope.go:117] "RemoveContainer" containerID="3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c" Nov 28 11:58:09 crc kubenswrapper[5030]: E1128 11:58:09.984648 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c\": container with ID starting with 3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c not found: ID does not exist" containerID="3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.984694 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c"} err="failed to get container status \"3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c\": rpc error: code = NotFound desc = could not find container \"3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c\": container with ID starting with 3f56bf7efba32d22f1009ade22f4266b4a203e97f7ea55a6336983d5aa98391c not found: ID does not exist" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.984948 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4310c2c4-4ad9-4820-abc9-09f761fa3a71-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4310c2c4-4ad9-4820-abc9-09f761fa3a71" (UID: "4310c2c4-4ad9-4820-abc9-09f761fa3a71"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.985202 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4310c2c4-4ad9-4820-abc9-09f761fa3a71-kube-api-access-xrq25" (OuterVolumeSpecName: "kube-api-access-xrq25") pod "4310c2c4-4ad9-4820-abc9-09f761fa3a71" (UID: "4310c2c4-4ad9-4820-abc9-09f761fa3a71"). InnerVolumeSpecName "kube-api-access-xrq25". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.990754 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qv6pd"] Nov 28 11:58:09 crc kubenswrapper[5030]: I1128 11:58:09.995193 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qv6pd"] Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.081821 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4310c2c4-4ad9-4820-abc9-09f761fa3a71-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.081857 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrq25\" (UniqueName: \"kubernetes.io/projected/4310c2c4-4ad9-4820-abc9-09f761fa3a71-kube-api-access-xrq25\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.081872 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.081883 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4310c2c4-4ad9-4820-abc9-09f761fa3a71-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 
11:58:10.276690 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8"] Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.280227 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-77js8"] Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.401889 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4310c2c4-4ad9-4820-abc9-09f761fa3a71" path="/var/lib/kubelet/pods/4310c2c4-4ad9-4820-abc9-09f761fa3a71/volumes" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.403161 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49559462-c755-4be6-8277-c8cc20aeb0e0" path="/var/lib/kubelet/pods/49559462-c755-4be6-8277-c8cc20aeb0e0/volumes" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.672612 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f"] Nov 28 11:58:10 crc kubenswrapper[5030]: E1128 11:58:10.674500 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49559462-c755-4be6-8277-c8cc20aeb0e0" containerName="controller-manager" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.674526 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="49559462-c755-4be6-8277-c8cc20aeb0e0" containerName="controller-manager" Nov 28 11:58:10 crc kubenswrapper[5030]: E1128 11:58:10.674579 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4310c2c4-4ad9-4820-abc9-09f761fa3a71" containerName="route-controller-manager" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.674593 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="4310c2c4-4ad9-4820-abc9-09f761fa3a71" containerName="route-controller-manager" Nov 28 11:58:10 crc kubenswrapper[5030]: E1128 11:58:10.674631 5030 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.674642 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.674809 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="49559462-c755-4be6-8277-c8cc20aeb0e0" containerName="controller-manager" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.674843 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="4310c2c4-4ad9-4820-abc9-09f761fa3a71" containerName="route-controller-manager" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.674856 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.677283 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cdf944ff-vn72b"] Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.686075 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f"] Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.689772 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cdf944ff-vn72b"] Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.678227 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.699343 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.703744 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.704049 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.706907 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.707228 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.707388 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.707582 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.707729 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.710531 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.710763 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.710910 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 11:58:10 crc 
kubenswrapper[5030]: I1128 11:58:10.711110 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.711700 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.718089 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.791998 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-config\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792069 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-client-ca\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792103 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wf79\" (UniqueName: \"kubernetes.io/projected/95a7afe1-52e0-41f3-934b-0eae308769e9-kube-api-access-8wf79\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792238 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-config\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792289 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a7afe1-52e0-41f3-934b-0eae308769e9-serving-cert\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792316 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-client-ca\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792358 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57326abf-1247-4fc6-8679-1272a0a3119b-serving-cert\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792386 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-proxy-ca-bundles\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: 
\"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.792426 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7w8\" (UniqueName: \"kubernetes.io/projected/57326abf-1247-4fc6-8679-1272a0a3119b-kube-api-access-tp7w8\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.894707 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-config\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.894818 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-client-ca\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.894872 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wf79\" (UniqueName: \"kubernetes.io/projected/95a7afe1-52e0-41f3-934b-0eae308769e9-kube-api-access-8wf79\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.894967 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-config\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.895018 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a7afe1-52e0-41f3-934b-0eae308769e9-serving-cert\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.895058 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-client-ca\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.895113 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57326abf-1247-4fc6-8679-1272a0a3119b-serving-cert\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.895158 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-proxy-ca-bundles\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc 
kubenswrapper[5030]: I1128 11:58:10.895218 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7w8\" (UniqueName: \"kubernetes.io/projected/57326abf-1247-4fc6-8679-1272a0a3119b-kube-api-access-tp7w8\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.897224 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-config\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.897730 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-client-ca\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.898998 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-client-ca\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.901165 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-proxy-ca-bundles\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: 
\"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.902008 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-config\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.903089 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57326abf-1247-4fc6-8679-1272a0a3119b-serving-cert\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.907807 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a7afe1-52e0-41f3-934b-0eae308769e9-serving-cert\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.925218 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wf79\" (UniqueName: \"kubernetes.io/projected/95a7afe1-52e0-41f3-934b-0eae308769e9-kube-api-access-8wf79\") pod \"controller-manager-6cdf944ff-vn72b\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:10 crc kubenswrapper[5030]: I1128 11:58:10.938111 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7w8\" (UniqueName: 
\"kubernetes.io/projected/57326abf-1247-4fc6-8679-1272a0a3119b-kube-api-access-tp7w8\") pod \"route-controller-manager-6bb4f5f444-mbz2f\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.030106 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.042730 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.541411 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cdf944ff-vn72b"] Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.576187 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f"] Nov 28 11:58:11 crc kubenswrapper[5030]: W1128 11:58:11.590011 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57326abf_1247_4fc6_8679_1272a0a3119b.slice/crio-db66247e173b675c30ce7011be8ef0006cb474c95943835a3fa94a1037dd0a31 WatchSource:0}: Error finding container db66247e173b675c30ce7011be8ef0006cb474c95943835a3fa94a1037dd0a31: Status 404 returned error can't find the container with id db66247e173b675c30ce7011be8ef0006cb474c95943835a3fa94a1037dd0a31 Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.963145 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" event={"ID":"95a7afe1-52e0-41f3-934b-0eae308769e9","Type":"ContainerStarted","Data":"9cfdc6055b623acadb18a505c656c979bb891904ca6816de6fddceca7c882c2d"} Nov 28 11:58:11 crc 
kubenswrapper[5030]: I1128 11:58:11.963701 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" event={"ID":"95a7afe1-52e0-41f3-934b-0eae308769e9","Type":"ContainerStarted","Data":"6bb67ffc66e80d33f09aa7439dd69ba229f1f1a35c7abe763eb9ef219148af28"} Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.963851 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.966197 5030 patch_prober.go:28] interesting pod/controller-manager-6cdf944ff-vn72b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.966294 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" podUID="95a7afe1-52e0-41f3-934b-0eae308769e9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.975618 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" event={"ID":"57326abf-1247-4fc6-8679-1272a0a3119b","Type":"ContainerStarted","Data":"4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d"} Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.975666 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" event={"ID":"57326abf-1247-4fc6-8679-1272a0a3119b","Type":"ContainerStarted","Data":"db66247e173b675c30ce7011be8ef0006cb474c95943835a3fa94a1037dd0a31"} Nov 28 
11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.976180 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.977129 5030 patch_prober.go:28] interesting pod/route-controller-manager-6bb4f5f444-mbz2f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.977208 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" podUID="57326abf-1247-4fc6-8679-1272a0a3119b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Nov 28 11:58:11 crc kubenswrapper[5030]: I1128 11:58:11.987847 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" podStartSLOduration=2.987828829 podStartE2EDuration="2.987828829s" podCreationTimestamp="2025-11-28 11:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:11.986860153 +0000 UTC m=+309.928602846" watchObservedRunningTime="2025-11-28 11:58:11.987828829 +0000 UTC m=+309.929571512" Nov 28 11:58:12 crc kubenswrapper[5030]: I1128 11:58:12.020191 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" podStartSLOduration=3.020166685 podStartE2EDuration="3.020166685s" podCreationTimestamp="2025-11-28 11:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:12.015941381 +0000 UTC m=+309.957684064" watchObservedRunningTime="2025-11-28 11:58:12.020166685 +0000 UTC m=+309.961909368" Nov 28 11:58:12 crc kubenswrapper[5030]: I1128 11:58:12.986194 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:12 crc kubenswrapper[5030]: I1128 11:58:12.986811 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:13 crc kubenswrapper[5030]: I1128 11:58:13.413889 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cdf944ff-vn72b"] Nov 28 11:58:13 crc kubenswrapper[5030]: I1128 11:58:13.436052 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f"] Nov 28 11:58:14 crc kubenswrapper[5030]: I1128 11:58:14.992532 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" podUID="57326abf-1247-4fc6-8679-1272a0a3119b" containerName="route-controller-manager" containerID="cri-o://4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d" gracePeriod=30 Nov 28 11:58:14 crc kubenswrapper[5030]: I1128 11:58:14.992752 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" podUID="95a7afe1-52e0-41f3-934b-0eae308769e9" containerName="controller-manager" containerID="cri-o://9cfdc6055b623acadb18a505c656c979bb891904ca6816de6fddceca7c882c2d" gracePeriod=30 Nov 28 11:58:15 crc kubenswrapper[5030]: I1128 11:58:15.983760 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:15 crc kubenswrapper[5030]: I1128 11:58:15.998325 5030 generic.go:334] "Generic (PLEG): container finished" podID="95a7afe1-52e0-41f3-934b-0eae308769e9" containerID="9cfdc6055b623acadb18a505c656c979bb891904ca6816de6fddceca7c882c2d" exitCode=0 Nov 28 11:58:15 crc kubenswrapper[5030]: I1128 11:58:15.998432 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" event={"ID":"95a7afe1-52e0-41f3-934b-0eae308769e9","Type":"ContainerDied","Data":"9cfdc6055b623acadb18a505c656c979bb891904ca6816de6fddceca7c882c2d"} Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.000026 5030 generic.go:334] "Generic (PLEG): container finished" podID="57326abf-1247-4fc6-8679-1272a0a3119b" containerID="4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d" exitCode=0 Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.000069 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" event={"ID":"57326abf-1247-4fc6-8679-1272a0a3119b","Type":"ContainerDied","Data":"4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d"} Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.000097 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" event={"ID":"57326abf-1247-4fc6-8679-1272a0a3119b","Type":"ContainerDied","Data":"db66247e173b675c30ce7011be8ef0006cb474c95943835a3fa94a1037dd0a31"} Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.000112 5030 scope.go:117] "RemoveContainer" containerID="4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.000191 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.020933 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8"] Nov 28 11:58:16 crc kubenswrapper[5030]: E1128 11:58:16.021254 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57326abf-1247-4fc6-8679-1272a0a3119b" containerName="route-controller-manager" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.021723 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="57326abf-1247-4fc6-8679-1272a0a3119b" containerName="route-controller-manager" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.021971 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="57326abf-1247-4fc6-8679-1272a0a3119b" containerName="route-controller-manager" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.022565 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.035057 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8"] Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.041102 5030 scope.go:117] "RemoveContainer" containerID="4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d" Nov 28 11:58:16 crc kubenswrapper[5030]: E1128 11:58:16.045259 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d\": container with ID starting with 4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d not found: ID does not exist" containerID="4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.045307 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d"} err="failed to get container status \"4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d\": rpc error: code = NotFound desc = could not find container \"4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d\": container with ID starting with 4171f7639d4dc8b57ae9d528d462e5247eae23edfae0f749d3ac300c0c77cc0d not found: ID does not exist" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.046124 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072622 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp7w8\" (UniqueName: \"kubernetes.io/projected/57326abf-1247-4fc6-8679-1272a0a3119b-kube-api-access-tp7w8\") pod \"57326abf-1247-4fc6-8679-1272a0a3119b\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072677 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a7afe1-52e0-41f3-934b-0eae308769e9-serving-cert\") pod \"95a7afe1-52e0-41f3-934b-0eae308769e9\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072721 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-config\") pod \"95a7afe1-52e0-41f3-934b-0eae308769e9\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072757 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57326abf-1247-4fc6-8679-1272a0a3119b-serving-cert\") pod \"57326abf-1247-4fc6-8679-1272a0a3119b\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072792 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-client-ca\") pod \"95a7afe1-52e0-41f3-934b-0eae308769e9\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072821 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-config\") pod \"57326abf-1247-4fc6-8679-1272a0a3119b\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072848 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-client-ca\") pod \"57326abf-1247-4fc6-8679-1272a0a3119b\" (UID: \"57326abf-1247-4fc6-8679-1272a0a3119b\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072869 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-proxy-ca-bundles\") pod \"95a7afe1-52e0-41f3-934b-0eae308769e9\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.072890 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wf79\" (UniqueName: \"kubernetes.io/projected/95a7afe1-52e0-41f3-934b-0eae308769e9-kube-api-access-8wf79\") pod \"95a7afe1-52e0-41f3-934b-0eae308769e9\" (UID: \"95a7afe1-52e0-41f3-934b-0eae308769e9\") " Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.073001 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-client-ca\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.073028 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6ktr\" (UniqueName: \"kubernetes.io/projected/ad0f587c-5158-472e-bbd9-ae49e51be1a7-kube-api-access-k6ktr\") pod 
\"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.073074 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-config\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.073099 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0f587c-5158-472e-bbd9-ae49e51be1a7-serving-cert\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.074412 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-client-ca" (OuterVolumeSpecName: "client-ca") pod "57326abf-1247-4fc6-8679-1272a0a3119b" (UID: "57326abf-1247-4fc6-8679-1272a0a3119b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.081416 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57326abf-1247-4fc6-8679-1272a0a3119b-kube-api-access-tp7w8" (OuterVolumeSpecName: "kube-api-access-tp7w8") pod "57326abf-1247-4fc6-8679-1272a0a3119b" (UID: "57326abf-1247-4fc6-8679-1272a0a3119b"). InnerVolumeSpecName "kube-api-access-tp7w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.081530 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-config" (OuterVolumeSpecName: "config") pod "95a7afe1-52e0-41f3-934b-0eae308769e9" (UID: "95a7afe1-52e0-41f3-934b-0eae308769e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.082148 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "95a7afe1-52e0-41f3-934b-0eae308769e9" (UID: "95a7afe1-52e0-41f3-934b-0eae308769e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.084105 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-config" (OuterVolumeSpecName: "config") pod "57326abf-1247-4fc6-8679-1272a0a3119b" (UID: "57326abf-1247-4fc6-8679-1272a0a3119b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.085745 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "95a7afe1-52e0-41f3-934b-0eae308769e9" (UID: "95a7afe1-52e0-41f3-934b-0eae308769e9"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.088338 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a7afe1-52e0-41f3-934b-0eae308769e9-kube-api-access-8wf79" (OuterVolumeSpecName: "kube-api-access-8wf79") pod "95a7afe1-52e0-41f3-934b-0eae308769e9" (UID: "95a7afe1-52e0-41f3-934b-0eae308769e9"). InnerVolumeSpecName "kube-api-access-8wf79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.093631 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a7afe1-52e0-41f3-934b-0eae308769e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "95a7afe1-52e0-41f3-934b-0eae308769e9" (UID: "95a7afe1-52e0-41f3-934b-0eae308769e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.094654 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57326abf-1247-4fc6-8679-1272a0a3119b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "57326abf-1247-4fc6-8679-1272a0a3119b" (UID: "57326abf-1247-4fc6-8679-1272a0a3119b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174108 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-client-ca\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174175 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6ktr\" (UniqueName: \"kubernetes.io/projected/ad0f587c-5158-472e-bbd9-ae49e51be1a7-kube-api-access-k6ktr\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174225 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-config\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174250 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0f587c-5158-472e-bbd9-ae49e51be1a7-serving-cert\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174316 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174330 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174339 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57326abf-1247-4fc6-8679-1272a0a3119b-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174352 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174364 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wf79\" (UniqueName: \"kubernetes.io/projected/95a7afe1-52e0-41f3-934b-0eae308769e9-kube-api-access-8wf79\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174374 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp7w8\" (UniqueName: \"kubernetes.io/projected/57326abf-1247-4fc6-8679-1272a0a3119b-kube-api-access-tp7w8\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174384 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a7afe1-52e0-41f3-934b-0eae308769e9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.174393 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95a7afe1-52e0-41f3-934b-0eae308769e9-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc 
kubenswrapper[5030]: I1128 11:58:16.174402 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57326abf-1247-4fc6-8679-1272a0a3119b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.175199 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-client-ca\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.176417 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-config\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.181734 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0f587c-5158-472e-bbd9-ae49e51be1a7-serving-cert\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.206615 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6ktr\" (UniqueName: \"kubernetes.io/projected/ad0f587c-5158-472e-bbd9-ae49e51be1a7-kube-api-access-k6ktr\") pod \"route-controller-manager-7489674f54-7phd8\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 
11:58:16.330874 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f"] Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.334410 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bb4f5f444-mbz2f"] Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.350962 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.400690 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57326abf-1247-4fc6-8679-1272a0a3119b" path="/var/lib/kubelet/pods/57326abf-1247-4fc6-8679-1272a0a3119b/volumes" Nov 28 11:58:16 crc kubenswrapper[5030]: I1128 11:58:16.905929 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8"] Nov 28 11:58:17 crc kubenswrapper[5030]: I1128 11:58:17.032035 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" event={"ID":"95a7afe1-52e0-41f3-934b-0eae308769e9","Type":"ContainerDied","Data":"6bb67ffc66e80d33f09aa7439dd69ba229f1f1a35c7abe763eb9ef219148af28"} Nov 28 11:58:17 crc kubenswrapper[5030]: I1128 11:58:17.032098 5030 scope.go:117] "RemoveContainer" containerID="9cfdc6055b623acadb18a505c656c979bb891904ca6816de6fddceca7c882c2d" Nov 28 11:58:17 crc kubenswrapper[5030]: I1128 11:58:17.032108 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cdf944ff-vn72b" Nov 28 11:58:17 crc kubenswrapper[5030]: I1128 11:58:17.053147 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" event={"ID":"ad0f587c-5158-472e-bbd9-ae49e51be1a7","Type":"ContainerStarted","Data":"252ce90f8418691accb0bddc2333e66f6de481b02c806c69dd8df2ded50f63de"} Nov 28 11:58:17 crc kubenswrapper[5030]: I1128 11:58:17.057676 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cdf944ff-vn72b"] Nov 28 11:58:17 crc kubenswrapper[5030]: I1128 11:58:17.062296 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cdf944ff-vn72b"] Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.066063 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" event={"ID":"ad0f587c-5158-472e-bbd9-ae49e51be1a7","Type":"ContainerStarted","Data":"27e8a1bee7bb1377aea3b52c562f025174f0b126d6114739d84822c3f6b351a0"} Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.066545 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.073580 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.085531 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" podStartSLOduration=5.085517119 podStartE2EDuration="5.085517119s" podCreationTimestamp="2025-11-28 11:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:18.083622228 +0000 UTC m=+316.025364911" watchObservedRunningTime="2025-11-28 11:58:18.085517119 +0000 UTC m=+316.027259792" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.398648 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a7afe1-52e0-41f3-934b-0eae308769e9" path="/var/lib/kubelet/pods/95a7afe1-52e0-41f3-934b-0eae308769e9/volumes" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.674746 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-klmvs"] Nov 28 11:58:18 crc kubenswrapper[5030]: E1128 11:58:18.674984 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a7afe1-52e0-41f3-934b-0eae308769e9" containerName="controller-manager" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.674998 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a7afe1-52e0-41f3-934b-0eae308769e9" containerName="controller-manager" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.675114 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a7afe1-52e0-41f3-934b-0eae308769e9" containerName="controller-manager" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.675547 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.682194 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.682298 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.686175 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.686458 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.688300 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.689793 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.692100 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-klmvs"] Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.693417 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.801391 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzbpk\" (UniqueName: \"kubernetes.io/projected/ef431de7-275d-46ef-8530-be05fea5185c-kube-api-access-pzbpk\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " 
pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.801687 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-client-ca\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.801743 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-proxy-ca-bundles\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.801783 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-config\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.802021 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef431de7-275d-46ef-8530-be05fea5185c-serving-cert\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.903541 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzbpk\" (UniqueName: 
\"kubernetes.io/projected/ef431de7-275d-46ef-8530-be05fea5185c-kube-api-access-pzbpk\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.903642 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-client-ca\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.903693 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-config\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.903724 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-proxy-ca-bundles\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.903756 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef431de7-275d-46ef-8530-be05fea5185c-serving-cert\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.905449 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-client-ca\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.905903 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-proxy-ca-bundles\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.906905 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-config\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.911167 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef431de7-275d-46ef-8530-be05fea5185c-serving-cert\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.922691 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzbpk\" (UniqueName: \"kubernetes.io/projected/ef431de7-275d-46ef-8530-be05fea5185c-kube-api-access-pzbpk\") pod \"controller-manager-6f5f858bdf-klmvs\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 
11:58:18 crc kubenswrapper[5030]: I1128 11:58:18.993215 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:19 crc kubenswrapper[5030]: I1128 11:58:19.246105 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-klmvs"] Nov 28 11:58:20 crc kubenswrapper[5030]: I1128 11:58:20.088301 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" event={"ID":"ef431de7-275d-46ef-8530-be05fea5185c","Type":"ContainerStarted","Data":"0f159421182a86d76bd11ccdd4683edf72b2afac7c325da104b099aa8c4bab7d"} Nov 28 11:58:20 crc kubenswrapper[5030]: I1128 11:58:20.088729 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" event={"ID":"ef431de7-275d-46ef-8530-be05fea5185c","Type":"ContainerStarted","Data":"29c0cd829b03277f929f50c3d000258a8cdc0461ada4c1d27bfb2af947a60fd6"} Nov 28 11:58:20 crc kubenswrapper[5030]: I1128 11:58:20.108069 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" podStartSLOduration=7.108043746 podStartE2EDuration="7.108043746s" podCreationTimestamp="2025-11-28 11:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:20.107330707 +0000 UTC m=+318.049073390" watchObservedRunningTime="2025-11-28 11:58:20.108043746 +0000 UTC m=+318.049786429" Nov 28 11:58:21 crc kubenswrapper[5030]: I1128 11:58:21.094420 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:21 crc kubenswrapper[5030]: I1128 11:58:21.100166 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:26 crc kubenswrapper[5030]: I1128 11:58:26.103212 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-klmvs"] Nov 28 11:58:26 crc kubenswrapper[5030]: I1128 11:58:26.104201 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" podUID="ef431de7-275d-46ef-8530-be05fea5185c" containerName="controller-manager" containerID="cri-o://0f159421182a86d76bd11ccdd4683edf72b2afac7c325da104b099aa8c4bab7d" gracePeriod=30 Nov 28 11:58:26 crc kubenswrapper[5030]: I1128 11:58:26.115224 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8"] Nov 28 11:58:26 crc kubenswrapper[5030]: I1128 11:58:26.115603 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" podUID="ad0f587c-5158-472e-bbd9-ae49e51be1a7" containerName="route-controller-manager" containerID="cri-o://27e8a1bee7bb1377aea3b52c562f025174f0b126d6114739d84822c3f6b351a0" gracePeriod=30 Nov 28 11:58:26 crc kubenswrapper[5030]: E1128 11:58:26.347411 5030 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad0f587c_5158_472e_bbd9_ae49e51be1a7.slice/crio-conmon-27e8a1bee7bb1377aea3b52c562f025174f0b126d6114739d84822c3f6b351a0.scope\": RecentStats: unable to find data in memory cache]" Nov 28 11:58:26 crc kubenswrapper[5030]: I1128 11:58:26.352087 5030 patch_prober.go:28] interesting pod/route-controller-manager-7489674f54-7phd8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Nov 28 11:58:26 crc kubenswrapper[5030]: I1128 11:58:26.352156 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" podUID="ad0f587c-5158-472e-bbd9-ae49e51be1a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.145849 5030 generic.go:334] "Generic (PLEG): container finished" podID="ef431de7-275d-46ef-8530-be05fea5185c" containerID="0f159421182a86d76bd11ccdd4683edf72b2afac7c325da104b099aa8c4bab7d" exitCode=0 Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.145939 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" event={"ID":"ef431de7-275d-46ef-8530-be05fea5185c","Type":"ContainerDied","Data":"0f159421182a86d76bd11ccdd4683edf72b2afac7c325da104b099aa8c4bab7d"} Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.148651 5030 generic.go:334] "Generic (PLEG): container finished" podID="ad0f587c-5158-472e-bbd9-ae49e51be1a7" containerID="27e8a1bee7bb1377aea3b52c562f025174f0b126d6114739d84822c3f6b351a0" exitCode=0 Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.148751 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" event={"ID":"ad0f587c-5158-472e-bbd9-ae49e51be1a7","Type":"ContainerDied","Data":"27e8a1bee7bb1377aea3b52c562f025174f0b126d6114739d84822c3f6b351a0"} Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.283431 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.325948 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"] Nov 28 11:58:27 crc kubenswrapper[5030]: E1128 11:58:27.326261 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0f587c-5158-472e-bbd9-ae49e51be1a7" containerName="route-controller-manager" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.326276 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0f587c-5158-472e-bbd9-ae49e51be1a7" containerName="route-controller-manager" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.326423 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0f587c-5158-472e-bbd9-ae49e51be1a7" containerName="route-controller-manager" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.326995 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.335942 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-client-ca\") pod \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.337333 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "ad0f587c-5158-472e-bbd9-ae49e51be1a7" (UID: "ad0f587c-5158-472e-bbd9-ae49e51be1a7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341002 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0f587c-5158-472e-bbd9-ae49e51be1a7-serving-cert\") pod \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341082 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-config\") pod \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341175 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6ktr\" (UniqueName: \"kubernetes.io/projected/ad0f587c-5158-472e-bbd9-ae49e51be1a7-kube-api-access-k6ktr\") pod \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\" (UID: \"ad0f587c-5158-472e-bbd9-ae49e51be1a7\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341498 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgpk\" (UniqueName: \"kubernetes.io/projected/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-kube-api-access-nfgpk\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341562 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-serving-cert\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " 
pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341615 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-client-ca\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341651 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-config\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.341793 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.343133 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-config" (OuterVolumeSpecName: "config") pod "ad0f587c-5158-472e-bbd9-ae49e51be1a7" (UID: "ad0f587c-5158-472e-bbd9-ae49e51be1a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.344693 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"] Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.348632 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.353733 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0f587c-5158-472e-bbd9-ae49e51be1a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ad0f587c-5158-472e-bbd9-ae49e51be1a7" (UID: "ad0f587c-5158-472e-bbd9-ae49e51be1a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.356685 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0f587c-5158-472e-bbd9-ae49e51be1a7-kube-api-access-k6ktr" (OuterVolumeSpecName: "kube-api-access-k6ktr") pod "ad0f587c-5158-472e-bbd9-ae49e51be1a7" (UID: "ad0f587c-5158-472e-bbd9-ae49e51be1a7"). InnerVolumeSpecName "kube-api-access-k6ktr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.443151 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-client-ca\") pod \"ef431de7-275d-46ef-8530-be05fea5185c\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.443618 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-proxy-ca-bundles\") pod \"ef431de7-275d-46ef-8530-be05fea5185c\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.443668 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzbpk\" (UniqueName: \"kubernetes.io/projected/ef431de7-275d-46ef-8530-be05fea5185c-kube-api-access-pzbpk\") pod 
\"ef431de7-275d-46ef-8530-be05fea5185c\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.443730 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef431de7-275d-46ef-8530-be05fea5185c-serving-cert\") pod \"ef431de7-275d-46ef-8530-be05fea5185c\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.443795 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-config\") pod \"ef431de7-275d-46ef-8530-be05fea5185c\" (UID: \"ef431de7-275d-46ef-8530-be05fea5185c\") " Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.443965 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-config\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.444041 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfgpk\" (UniqueName: \"kubernetes.io/projected/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-kube-api-access-nfgpk\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.444075 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-serving-cert\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: 
\"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.444107 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-client-ca\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.444147 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad0f587c-5158-472e-bbd9-ae49e51be1a7-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.444161 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0f587c-5158-472e-bbd9-ae49e51be1a7-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.444172 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6ktr\" (UniqueName: \"kubernetes.io/projected/ad0f587c-5158-472e-bbd9-ae49e51be1a7-kube-api-access-k6ktr\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.444516 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef431de7-275d-46ef-8530-be05fea5185c" (UID: "ef431de7-275d-46ef-8530-be05fea5185c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.445218 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-client-ca\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.445216 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-config" (OuterVolumeSpecName: "config") pod "ef431de7-275d-46ef-8530-be05fea5185c" (UID: "ef431de7-275d-46ef-8530-be05fea5185c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.445872 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ef431de7-275d-46ef-8530-be05fea5185c" (UID: "ef431de7-275d-46ef-8530-be05fea5185c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.446143 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-config\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.450267 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-serving-cert\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.451109 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef431de7-275d-46ef-8530-be05fea5185c-kube-api-access-pzbpk" (OuterVolumeSpecName: "kube-api-access-pzbpk") pod "ef431de7-275d-46ef-8530-be05fea5185c" (UID: "ef431de7-275d-46ef-8530-be05fea5185c"). InnerVolumeSpecName "kube-api-access-pzbpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.451326 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef431de7-275d-46ef-8530-be05fea5185c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef431de7-275d-46ef-8530-be05fea5185c" (UID: "ef431de7-275d-46ef-8530-be05fea5185c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.464354 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfgpk\" (UniqueName: \"kubernetes.io/projected/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-kube-api-access-nfgpk\") pod \"route-controller-manager-69777494bf-89c8g\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") " pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.545259 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-config\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.545294 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.545304 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef431de7-275d-46ef-8530-be05fea5185c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.545319 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzbpk\" (UniqueName: \"kubernetes.io/projected/ef431de7-275d-46ef-8530-be05fea5185c-kube-api-access-pzbpk\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.545337 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef431de7-275d-46ef-8530-be05fea5185c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 11:58:27 crc kubenswrapper[5030]: I1128 11:58:27.697725 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.035869 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"] Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.157202 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" event={"ID":"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a","Type":"ContainerStarted","Data":"0066d3b036f70ef5c8af44ea19b5af9ef0ab8c863d9b16d1bfbe66534174a6db"} Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.162679 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.162684 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5f858bdf-klmvs" event={"ID":"ef431de7-275d-46ef-8530-be05fea5185c","Type":"ContainerDied","Data":"29c0cd829b03277f929f50c3d000258a8cdc0461ada4c1d27bfb2af947a60fd6"} Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.162828 5030 scope.go:117] "RemoveContainer" containerID="0f159421182a86d76bd11ccdd4683edf72b2afac7c325da104b099aa8c4bab7d" Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.165902 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" event={"ID":"ad0f587c-5158-472e-bbd9-ae49e51be1a7","Type":"ContainerDied","Data":"252ce90f8418691accb0bddc2333e66f6de481b02c806c69dd8df2ded50f63de"} Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.166038 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8" Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.197304 5030 scope.go:117] "RemoveContainer" containerID="27e8a1bee7bb1377aea3b52c562f025174f0b126d6114739d84822c3f6b351a0" Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.244334 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-klmvs"] Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.271161 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-klmvs"] Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.290548 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8"] Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.297963 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-7phd8"] Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.401041 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0f587c-5158-472e-bbd9-ae49e51be1a7" path="/var/lib/kubelet/pods/ad0f587c-5158-472e-bbd9-ae49e51be1a7/volumes" Nov 28 11:58:28 crc kubenswrapper[5030]: I1128 11:58:28.401718 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef431de7-275d-46ef-8530-be05fea5185c" path="/var/lib/kubelet/pods/ef431de7-275d-46ef-8530-be05fea5185c/volumes" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.179061 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" event={"ID":"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a","Type":"ContainerStarted","Data":"aedc73b9770c77ba062b6f4cb78b533148f3c6e3d2dc029ad236d1e6a0655c86"} Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.180500 5030 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.188835 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.207931 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" podStartSLOduration=3.207900347 podStartE2EDuration="3.207900347s" podCreationTimestamp="2025-11-28 11:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:29.203749784 +0000 UTC m=+327.145492477" watchObservedRunningTime="2025-11-28 11:58:29.207900347 +0000 UTC m=+327.149643030" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.685856 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-96687df4b-kw5g4"] Nov 28 11:58:29 crc kubenswrapper[5030]: E1128 11:58:29.686430 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef431de7-275d-46ef-8530-be05fea5185c" containerName="controller-manager" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.686444 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef431de7-275d-46ef-8530-be05fea5185c" containerName="controller-manager" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.686583 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef431de7-275d-46ef-8530-be05fea5185c" containerName="controller-manager" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.687015 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.694145 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.696461 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.696834 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.697348 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.697429 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.697539 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.703406 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.711751 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-96687df4b-kw5g4"] Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.796428 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ba71a8-9eaf-4378-b348-52e150075b8e-serving-cert\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " 
pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.796509 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-proxy-ca-bundles\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.796539 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-client-ca\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.796576 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njz22\" (UniqueName: \"kubernetes.io/projected/d6ba71a8-9eaf-4378-b348-52e150075b8e-kube-api-access-njz22\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.796776 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-config\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.898632 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njz22\" 
(UniqueName: \"kubernetes.io/projected/d6ba71a8-9eaf-4378-b348-52e150075b8e-kube-api-access-njz22\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.898743 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-config\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.898824 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ba71a8-9eaf-4378-b348-52e150075b8e-serving-cert\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.898849 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-proxy-ca-bundles\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.898878 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-client-ca\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.900382 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-client-ca\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.901399 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-proxy-ca-bundles\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.901837 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-config\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.915647 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ba71a8-9eaf-4378-b348-52e150075b8e-serving-cert\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:29 crc kubenswrapper[5030]: I1128 11:58:29.921998 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njz22\" (UniqueName: \"kubernetes.io/projected/d6ba71a8-9eaf-4378-b348-52e150075b8e-kube-api-access-njz22\") pod \"controller-manager-96687df4b-kw5g4\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") " pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:30 crc 
kubenswrapper[5030]: I1128 11:58:30.008592 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" Nov 28 11:58:30 crc kubenswrapper[5030]: I1128 11:58:30.513223 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-96687df4b-kw5g4"] Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.196114 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" event={"ID":"d6ba71a8-9eaf-4378-b348-52e150075b8e","Type":"ContainerStarted","Data":"074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c"} Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.196489 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" event={"ID":"d6ba71a8-9eaf-4378-b348-52e150075b8e","Type":"ContainerStarted","Data":"52e5a4fb55389ea490e0ddfc5f8b2fbcfb431f56fbed3ba68aef6a9c58a21c10"} Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.224848 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" podStartSLOduration=5.224827019 podStartE2EDuration="5.224827019s" podCreationTimestamp="2025-11-28 11:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:31.224719666 +0000 UTC m=+329.166462369" watchObservedRunningTime="2025-11-28 11:58:31.224827019 +0000 UTC m=+329.166569712" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.729673 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5285v"] Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.731114 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.802764 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5285v"] Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.933218 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37618550-7617-4054-8eac-e00351f88f16-registry-certificates\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.933577 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tk2\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-kube-api-access-x8tk2\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.933681 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37618550-7617-4054-8eac-e00351f88f16-trusted-ca\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.933768 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-registry-tls\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.933856 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37618550-7617-4054-8eac-e00351f88f16-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.933944 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37618550-7617-4054-8eac-e00351f88f16-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.934025 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-bound-sa-token\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.934149 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:31 crc kubenswrapper[5030]: I1128 11:58:31.973274 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.035543 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37618550-7617-4054-8eac-e00351f88f16-registry-certificates\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.035867 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8tk2\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-kube-api-access-x8tk2\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.035956 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37618550-7617-4054-8eac-e00351f88f16-trusted-ca\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.036042 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-registry-tls\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v" Nov 28 11:58:32 crc 
kubenswrapper[5030]: I1128 11:58:32.036149 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37618550-7617-4054-8eac-e00351f88f16-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.036379 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37618550-7617-4054-8eac-e00351f88f16-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.037403 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37618550-7617-4054-8eac-e00351f88f16-trusted-ca\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.037012 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37618550-7617-4054-8eac-e00351f88f16-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.037407 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-bound-sa-token\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.037241 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37618550-7617-4054-8eac-e00351f88f16-registry-certificates\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.042518 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37618550-7617-4054-8eac-e00351f88f16-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.042805 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-registry-tls\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.050839 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8tk2\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-kube-api-access-x8tk2\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.051731 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37618550-7617-4054-8eac-e00351f88f16-bound-sa-token\") pod \"image-registry-66df7c8f76-5285v\" (UID: \"37618550-7617-4054-8eac-e00351f88f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.101510 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.202931 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.218357 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4"
Nov 28 11:58:32 crc kubenswrapper[5030]: I1128 11:58:32.339669 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5285v"]
Nov 28 11:58:32 crc kubenswrapper[5030]: W1128 11:58:32.349816 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37618550_7617_4054_8eac_e00351f88f16.slice/crio-271f89a7e51c04b598e35b5db43a482dde6366044b7e5ea6a81bd67bad7ad1af WatchSource:0}: Error finding container 271f89a7e51c04b598e35b5db43a482dde6366044b7e5ea6a81bd67bad7ad1af: Status 404 returned error can't find the container with id 271f89a7e51c04b598e35b5db43a482dde6366044b7e5ea6a81bd67bad7ad1af
Nov 28 11:58:33 crc kubenswrapper[5030]: I1128 11:58:33.211006 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5285v" event={"ID":"37618550-7617-4054-8eac-e00351f88f16","Type":"ContainerStarted","Data":"9b2a38f99d25a59d73a6d854a6ae9e3a25052148f410363aa7e76eecbec90f2c"}
Nov 28 11:58:33 crc kubenswrapper[5030]: I1128 11:58:33.211361 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5285v" event={"ID":"37618550-7617-4054-8eac-e00351f88f16","Type":"ContainerStarted","Data":"271f89a7e51c04b598e35b5db43a482dde6366044b7e5ea6a81bd67bad7ad1af"}
Nov 28 11:58:33 crc kubenswrapper[5030]: I1128 11:58:33.241975 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5285v" podStartSLOduration=2.241951427 podStartE2EDuration="2.241951427s" podCreationTimestamp="2025-11-28 11:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:33.237814845 +0000 UTC m=+331.179557568" watchObservedRunningTime="2025-11-28 11:58:33.241951427 +0000 UTC m=+331.183694150"
Nov 28 11:58:34 crc kubenswrapper[5030]: I1128 11:58:34.221563 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:49 crc kubenswrapper[5030]: I1128 11:58:49.404716 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-96687df4b-kw5g4"]
Nov 28 11:58:49 crc kubenswrapper[5030]: I1128 11:58:49.405576 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" podUID="d6ba71a8-9eaf-4378-b348-52e150075b8e" containerName="controller-manager" containerID="cri-o://074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c" gracePeriod=30
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.055510 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.128696 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njz22\" (UniqueName: \"kubernetes.io/projected/d6ba71a8-9eaf-4378-b348-52e150075b8e-kube-api-access-njz22\") pod \"d6ba71a8-9eaf-4378-b348-52e150075b8e\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") "
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.128802 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-config\") pod \"d6ba71a8-9eaf-4378-b348-52e150075b8e\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") "
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.128829 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-client-ca\") pod \"d6ba71a8-9eaf-4378-b348-52e150075b8e\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") "
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.128933 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-proxy-ca-bundles\") pod \"d6ba71a8-9eaf-4378-b348-52e150075b8e\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") "
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.129035 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ba71a8-9eaf-4378-b348-52e150075b8e-serving-cert\") pod \"d6ba71a8-9eaf-4378-b348-52e150075b8e\" (UID: \"d6ba71a8-9eaf-4378-b348-52e150075b8e\") "
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.130129 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d6ba71a8-9eaf-4378-b348-52e150075b8e" (UID: "d6ba71a8-9eaf-4378-b348-52e150075b8e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.130145 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d6ba71a8-9eaf-4378-b348-52e150075b8e" (UID: "d6ba71a8-9eaf-4378-b348-52e150075b8e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.131291 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-config" (OuterVolumeSpecName: "config") pod "d6ba71a8-9eaf-4378-b348-52e150075b8e" (UID: "d6ba71a8-9eaf-4378-b348-52e150075b8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.137810 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ba71a8-9eaf-4378-b348-52e150075b8e-kube-api-access-njz22" (OuterVolumeSpecName: "kube-api-access-njz22") pod "d6ba71a8-9eaf-4378-b348-52e150075b8e" (UID: "d6ba71a8-9eaf-4378-b348-52e150075b8e"). InnerVolumeSpecName "kube-api-access-njz22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.156744 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ba71a8-9eaf-4378-b348-52e150075b8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d6ba71a8-9eaf-4378-b348-52e150075b8e" (UID: "d6ba71a8-9eaf-4378-b348-52e150075b8e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.231399 5030 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.231446 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6ba71a8-9eaf-4378-b348-52e150075b8e-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.231458 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njz22\" (UniqueName: \"kubernetes.io/projected/d6ba71a8-9eaf-4378-b348-52e150075b8e-kube-api-access-njz22\") on node \"crc\" DevicePath \"\""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.231482 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.231493 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6ba71a8-9eaf-4378-b348-52e150075b8e-config\") on node \"crc\" DevicePath \"\""
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.384174 5030 generic.go:334] "Generic (PLEG): container finished" podID="d6ba71a8-9eaf-4378-b348-52e150075b8e" containerID="074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c" exitCode=0
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.384271 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.384300 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" event={"ID":"d6ba71a8-9eaf-4378-b348-52e150075b8e","Type":"ContainerDied","Data":"074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c"}
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.384813 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" event={"ID":"d6ba71a8-9eaf-4378-b348-52e150075b8e","Type":"ContainerDied","Data":"52e5a4fb55389ea490e0ddfc5f8b2fbcfb431f56fbed3ba68aef6a9c58a21c10"}
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.384837 5030 scope.go:117] "RemoveContainer" containerID="074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.405144 5030 scope.go:117] "RemoveContainer" containerID="074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c"
Nov 28 11:58:50 crc kubenswrapper[5030]: E1128 11:58:50.405806 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c\": container with ID starting with 074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c not found: ID does not exist" containerID="074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.405882 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c"} err="failed to get container status \"074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c\": rpc error: code = NotFound desc = could not find container \"074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c\": container with ID starting with 074a648775549075d03b827c4c4b8c79aa980c067450c54e5b9824b21bac654c not found: ID does not exist"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.426806 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-96687df4b-kw5g4"]
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.433614 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-96687df4b-kw5g4"]
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.712363 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"]
Nov 28 11:58:50 crc kubenswrapper[5030]: E1128 11:58:50.713071 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ba71a8-9eaf-4378-b348-52e150075b8e" containerName="controller-manager"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.713102 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ba71a8-9eaf-4378-b348-52e150075b8e" containerName="controller-manager"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.713233 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ba71a8-9eaf-4378-b348-52e150075b8e" containerName="controller-manager"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.713852 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.716944 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.717232 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.717654 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.718760 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.720799 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.727193 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.727876 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.732735 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"]
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.872347 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwhn\" (UniqueName: \"kubernetes.io/projected/69edcb28-13c3-4a70-ade7-3fd0a561aed2-kube-api-access-smwhn\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.872408 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-proxy-ca-bundles\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.872441 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69edcb28-13c3-4a70-ade7-3fd0a561aed2-serving-cert\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.872478 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-client-ca\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.872514 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-config\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.975890 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwhn\" (UniqueName: \"kubernetes.io/projected/69edcb28-13c3-4a70-ade7-3fd0a561aed2-kube-api-access-smwhn\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.975952 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-proxy-ca-bundles\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.975991 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69edcb28-13c3-4a70-ade7-3fd0a561aed2-serving-cert\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.976018 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-client-ca\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.976045 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-config\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.977687 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-config\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.978313 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-client-ca\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.979168 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69edcb28-13c3-4a70-ade7-3fd0a561aed2-proxy-ca-bundles\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.983140 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69edcb28-13c3-4a70-ade7-3fd0a561aed2-serving-cert\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:50 crc kubenswrapper[5030]: I1128 11:58:50.992958 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwhn\" (UniqueName: \"kubernetes.io/projected/69edcb28-13c3-4a70-ade7-3fd0a561aed2-kube-api-access-smwhn\") pod \"controller-manager-6f5f858bdf-6dgtj\" (UID: \"69edcb28-13c3-4a70-ade7-3fd0a561aed2\") " pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:51 crc kubenswrapper[5030]: I1128 11:58:51.009448 5030 patch_prober.go:28] interesting pod/controller-manager-96687df4b-kw5g4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 11:58:51 crc kubenswrapper[5030]: I1128 11:58:51.009563 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-96687df4b-kw5g4" podUID="d6ba71a8-9eaf-4378-b348-52e150075b8e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 11:58:51 crc kubenswrapper[5030]: I1128 11:58:51.084835 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:51 crc kubenswrapper[5030]: I1128 11:58:51.547556 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"]
Nov 28 11:58:51 crc kubenswrapper[5030]: W1128 11:58:51.555607 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69edcb28_13c3_4a70_ade7_3fd0a561aed2.slice/crio-a8980341d2d2e99aa6147da97ee65d35798c4d56e812a53728fabcf0020c71f1 WatchSource:0}: Error finding container a8980341d2d2e99aa6147da97ee65d35798c4d56e812a53728fabcf0020c71f1: Status 404 returned error can't find the container with id a8980341d2d2e99aa6147da97ee65d35798c4d56e812a53728fabcf0020c71f1
Nov 28 11:58:52 crc kubenswrapper[5030]: I1128 11:58:52.109533 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5285v"
Nov 28 11:58:52 crc kubenswrapper[5030]: I1128 11:58:52.179101 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8vhfh"]
Nov 28 11:58:52 crc kubenswrapper[5030]: I1128 11:58:52.420772 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ba71a8-9eaf-4378-b348-52e150075b8e" path="/var/lib/kubelet/pods/d6ba71a8-9eaf-4378-b348-52e150075b8e/volumes"
Nov 28 11:58:52 crc kubenswrapper[5030]: I1128 11:58:52.422193 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj" event={"ID":"69edcb28-13c3-4a70-ade7-3fd0a561aed2","Type":"ContainerStarted","Data":"a8980341d2d2e99aa6147da97ee65d35798c4d56e812a53728fabcf0020c71f1"}
Nov 28 11:58:53 crc kubenswrapper[5030]: I1128 11:58:53.421282 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj" event={"ID":"69edcb28-13c3-4a70-ade7-3fd0a561aed2","Type":"ContainerStarted","Data":"d45877b34d0debb6978d395a5206ad6da6bcdd297f6c0addc22b3de1bfb358d3"}
Nov 28 11:58:53 crc kubenswrapper[5030]: I1128 11:58:53.421977 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:53 crc kubenswrapper[5030]: I1128 11:58:53.433304 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj"
Nov 28 11:58:53 crc kubenswrapper[5030]: I1128 11:58:53.450025 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f5f858bdf-6dgtj" podStartSLOduration=4.449988389 podStartE2EDuration="4.449988389s" podCreationTimestamp="2025-11-28 11:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:58:53.448069996 +0000 UTC m=+351.389812679" watchObservedRunningTime="2025-11-28 11:58:53.449988389 +0000 UTC m=+351.391731102"
Nov 28 11:59:03 crc kubenswrapper[5030]: I1128 11:59:03.202391 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 11:59:03 crc kubenswrapper[5030]: I1128 11:59:03.203286 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 11:59:09 crc kubenswrapper[5030]: I1128 11:59:09.410933 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"]
Nov 28 11:59:09 crc kubenswrapper[5030]: I1128 11:59:09.411762 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" podUID="7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" containerName="route-controller-manager" containerID="cri-o://aedc73b9770c77ba062b6f4cb78b533148f3c6e3d2dc029ad236d1e6a0655c86" gracePeriod=30
Nov 28 11:59:09 crc kubenswrapper[5030]: I1128 11:59:09.546209 5030 generic.go:334] "Generic (PLEG): container finished" podID="7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" containerID="aedc73b9770c77ba062b6f4cb78b533148f3c6e3d2dc029ad236d1e6a0655c86" exitCode=0
Nov 28 11:59:09 crc kubenswrapper[5030]: I1128 11:59:09.546265 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" event={"ID":"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a","Type":"ContainerDied","Data":"aedc73b9770c77ba062b6f4cb78b533148f3c6e3d2dc029ad236d1e6a0655c86"}
Nov 28 11:59:09 crc kubenswrapper[5030]: I1128 11:59:09.987272 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.141070 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-config\") pod \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") "
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.141217 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-serving-cert\") pod \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") "
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.141289 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfgpk\" (UniqueName: \"kubernetes.io/projected/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-kube-api-access-nfgpk\") pod \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") "
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.141316 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-client-ca\") pod \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\" (UID: \"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a\") "
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.142803 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" (UID: "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.143054 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-config" (OuterVolumeSpecName: "config") pod "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" (UID: "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.149820 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" (UID: "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.150600 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-kube-api-access-nfgpk" (OuterVolumeSpecName: "kube-api-access-nfgpk") pod "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" (UID: "7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a"). InnerVolumeSpecName "kube-api-access-nfgpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.243305 5030 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-config\") on node \"crc\" DevicePath \"\""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.243362 5030 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.243383 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfgpk\" (UniqueName: \"kubernetes.io/projected/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-kube-api-access-nfgpk\") on node \"crc\" DevicePath \"\""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.243406 5030 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.555093 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g" event={"ID":"7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a","Type":"ContainerDied","Data":"0066d3b036f70ef5c8af44ea19b5af9ef0ab8c863d9b16d1bfbe66534174a6db"}
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.555178 5030 scope.go:117] "RemoveContainer" containerID="aedc73b9770c77ba062b6f4cb78b533148f3c6e3d2dc029ad236d1e6a0655c86"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.555222 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.581325 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"]
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.586548 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69777494bf-89c8g"]
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.718714 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"]
Nov 28 11:59:10 crc kubenswrapper[5030]: E1128 11:59:10.718961 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" containerName="route-controller-manager"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.718976 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" containerName="route-controller-manager"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.719103 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" containerName="route-controller-manager"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.719545 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.722284 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.722285 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.722558 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.724744 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.725297 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.725557 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.751047 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz87h\" (UniqueName: \"kubernetes.io/projected/efbc0c98-d760-4c42-86a0-2519fdfde24f-kube-api-access-lz87h\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.751099 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efbc0c98-d760-4c42-86a0-2519fdfde24f-client-ca\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.751129 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbc0c98-d760-4c42-86a0-2519fdfde24f-config\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.751150 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efbc0c98-d760-4c42-86a0-2519fdfde24f-serving-cert\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.805360 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"]
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.852218 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz87h\" (UniqueName: \"kubernetes.io/projected/efbc0c98-d760-4c42-86a0-2519fdfde24f-kube-api-access-lz87h\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"
Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.852279 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efbc0c98-d760-4c42-86a0-2519fdfde24f-client-ca\") pod
\"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.852313 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbc0c98-d760-4c42-86a0-2519fdfde24f-config\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.852335 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efbc0c98-d760-4c42-86a0-2519fdfde24f-serving-cert\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.853519 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efbc0c98-d760-4c42-86a0-2519fdfde24f-client-ca\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.853807 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbc0c98-d760-4c42-86a0-2519fdfde24f-config\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.863522 5030 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efbc0c98-d760-4c42-86a0-2519fdfde24f-serving-cert\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:10 crc kubenswrapper[5030]: I1128 11:59:10.875609 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz87h\" (UniqueName: \"kubernetes.io/projected/efbc0c98-d760-4c42-86a0-2519fdfde24f-kube-api-access-lz87h\") pod \"route-controller-manager-7489674f54-zdr9v\" (UID: \"efbc0c98-d760-4c42-86a0-2519fdfde24f\") " pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:11 crc kubenswrapper[5030]: I1128 11:59:11.040774 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:11 crc kubenswrapper[5030]: I1128 11:59:11.536316 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v"] Nov 28 11:59:11 crc kubenswrapper[5030]: I1128 11:59:11.568134 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" event={"ID":"efbc0c98-d760-4c42-86a0-2519fdfde24f","Type":"ContainerStarted","Data":"6adc0df5f7cacd597a6b7ae09237792d70cfb50cc5ce258c3c8afcc2bcf2af21"} Nov 28 11:59:12 crc kubenswrapper[5030]: I1128 11:59:12.406112 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a" path="/var/lib/kubelet/pods/7b6ae70e-5622-4a90-8cb9-4f8078fc3a9a/volumes" Nov 28 11:59:12 crc kubenswrapper[5030]: I1128 11:59:12.577087 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" 
event={"ID":"efbc0c98-d760-4c42-86a0-2519fdfde24f","Type":"ContainerStarted","Data":"3ce03c1f3c29751034c8579b4473f57fcce9ef2ed3e9af61107f274c4d302c57"} Nov 28 11:59:12 crc kubenswrapper[5030]: I1128 11:59:12.577490 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:12 crc kubenswrapper[5030]: I1128 11:59:12.582953 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" Nov 28 11:59:12 crc kubenswrapper[5030]: I1128 11:59:12.602891 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7489674f54-zdr9v" podStartSLOduration=3.602861862 podStartE2EDuration="3.602861862s" podCreationTimestamp="2025-11-28 11:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:59:12.596959372 +0000 UTC m=+370.538702075" watchObservedRunningTime="2025-11-28 11:59:12.602861862 +0000 UTC m=+370.544604545" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.221630 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" podUID="0623247c-d46a-4e16-8731-cdd6d2f4a16a" containerName="registry" containerID="cri-o://73183920bd6fa19e29d2f466bc1cdbc5a3ab87d4c47f43c378252276ee0a5dbc" gracePeriod=30 Nov 28 11:59:17 crc kubenswrapper[5030]: E1128 11:59:17.313975 5030 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0623247c_d46a_4e16_8731_cdd6d2f4a16a.slice/crio-conmon-73183920bd6fa19e29d2f466bc1cdbc5a3ab87d4c47f43c378252276ee0a5dbc.scope\": RecentStats: unable to find data in memory cache]" Nov 28 
11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.613601 5030 generic.go:334] "Generic (PLEG): container finished" podID="0623247c-d46a-4e16-8731-cdd6d2f4a16a" containerID="73183920bd6fa19e29d2f466bc1cdbc5a3ab87d4c47f43c378252276ee0a5dbc" exitCode=0 Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.613742 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" event={"ID":"0623247c-d46a-4e16-8731-cdd6d2f4a16a","Type":"ContainerDied","Data":"73183920bd6fa19e29d2f466bc1cdbc5a3ab87d4c47f43c378252276ee0a5dbc"} Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.797348 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.971246 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0623247c-d46a-4e16-8731-cdd6d2f4a16a-installation-pull-secrets\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.971386 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-trusted-ca\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.971452 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-certificates\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.971564 5030 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-p8rkt\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-kube-api-access-p8rkt\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.971618 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0623247c-d46a-4e16-8731-cdd6d2f4a16a-ca-trust-extracted\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.972351 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.972444 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.973597 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-tls\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.973666 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-bound-sa-token\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.973880 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\" (UID: \"0623247c-d46a-4e16-8731-cdd6d2f4a16a\") " Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.974206 5030 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.974230 5030 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0623247c-d46a-4e16-8731-cdd6d2f4a16a-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.978108 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0623247c-d46a-4e16-8731-cdd6d2f4a16a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: 
"0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.985304 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-kube-api-access-p8rkt" (OuterVolumeSpecName: "kube-api-access-p8rkt") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "kube-api-access-p8rkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.986919 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.987377 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.987629 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:17 crc kubenswrapper[5030]: I1128 11:59:17.996376 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0623247c-d46a-4e16-8731-cdd6d2f4a16a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0623247c-d46a-4e16-8731-cdd6d2f4a16a" (UID: "0623247c-d46a-4e16-8731-cdd6d2f4a16a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.076051 5030 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0623247c-d46a-4e16-8731-cdd6d2f4a16a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.076093 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8rkt\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-kube-api-access-p8rkt\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.076105 5030 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0623247c-d46a-4e16-8731-cdd6d2f4a16a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.076117 5030 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.076131 5030 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0623247c-d46a-4e16-8731-cdd6d2f4a16a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.625319 5030 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" event={"ID":"0623247c-d46a-4e16-8731-cdd6d2f4a16a","Type":"ContainerDied","Data":"1de33d8736c33cf584e69d765e0ce7d955aa6c4789344f35f43a3bc15ef2362e"} Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.625380 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8vhfh" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.625400 5030 scope.go:117] "RemoveContainer" containerID="73183920bd6fa19e29d2f466bc1cdbc5a3ab87d4c47f43c378252276ee0a5dbc" Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.657492 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8vhfh"] Nov 28 11:59:18 crc kubenswrapper[5030]: I1128 11:59:18.665759 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8vhfh"] Nov 28 11:59:20 crc kubenswrapper[5030]: I1128 11:59:20.406689 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0623247c-d46a-4e16-8731-cdd6d2f4a16a" path="/var/lib/kubelet/pods/0623247c-d46a-4e16-8731-cdd6d2f4a16a/volumes" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.334714 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq47d"] Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.335915 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lq47d" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="registry-server" containerID="cri-o://a58992aa9e0f559a81c3262b99f07d83e5f62bb73fd821b21a26bdf88eaade9e" gracePeriod=30 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.363646 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7c95t"] Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.363949 
5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7c95t" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="registry-server" containerID="cri-o://588f96aada3fbbc2ea0a1bac8ded0114644c6b301921933803b703f8ddf2bc37" gracePeriod=30 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.369178 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-frtvx"] Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.369506 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" podUID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" containerName="marketplace-operator" containerID="cri-o://c6e441200129c812d062ae3d3eaede9d5ab531c39453c4d0a60ca97addcb2d9b" gracePeriod=30 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.377905 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5md7x"] Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.378214 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5md7x" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="registry-server" containerID="cri-o://0fa74c640893f21273ad2607fe4babdb3de7fe666947d0dd386cca0d34c74679" gracePeriod=30 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.387620 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntjwt"] Nov 28 11:59:32 crc kubenswrapper[5030]: E1128 11:59:32.387932 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0623247c-d46a-4e16-8731-cdd6d2f4a16a" containerName="registry" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.387949 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="0623247c-d46a-4e16-8731-cdd6d2f4a16a" containerName="registry" Nov 28 
11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.388105 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="0623247c-d46a-4e16-8731-cdd6d2f4a16a" containerName="registry" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.388635 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.404796 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spqtx"] Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.405054 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spqtx" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="registry-server" containerID="cri-o://4a92fae74d12cd1365125dbca346d6c2698fb9dee32971b166f665c033b3600c" gracePeriod=30 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.406514 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntjwt"] Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.501219 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da571a9b-f5ae-4bcf-b98c-f92299206a54-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.501273 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da571a9b-f5ae-4bcf-b98c-f92299206a54-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.501331 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj82d\" (UniqueName: \"kubernetes.io/projected/da571a9b-f5ae-4bcf-b98c-f92299206a54-kube-api-access-nj82d\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.603224 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj82d\" (UniqueName: \"kubernetes.io/projected/da571a9b-f5ae-4bcf-b98c-f92299206a54-kube-api-access-nj82d\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.603296 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da571a9b-f5ae-4bcf-b98c-f92299206a54-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.603318 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da571a9b-f5ae-4bcf-b98c-f92299206a54-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.604380 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/da571a9b-f5ae-4bcf-b98c-f92299206a54-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.611074 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da571a9b-f5ae-4bcf-b98c-f92299206a54-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.620766 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj82d\" (UniqueName: \"kubernetes.io/projected/da571a9b-f5ae-4bcf-b98c-f92299206a54-kube-api-access-nj82d\") pod \"marketplace-operator-79b997595-ntjwt\" (UID: \"da571a9b-f5ae-4bcf-b98c-f92299206a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.723991 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.737642 5030 generic.go:334] "Generic (PLEG): container finished" podID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerID="4a92fae74d12cd1365125dbca346d6c2698fb9dee32971b166f665c033b3600c" exitCode=0 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.737712 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spqtx" event={"ID":"b790b1a3-16d7-498a-8f14-36e52122ad9b","Type":"ContainerDied","Data":"4a92fae74d12cd1365125dbca346d6c2698fb9dee32971b166f665c033b3600c"} Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.744185 5030 generic.go:334] "Generic (PLEG): container finished" podID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerID="a58992aa9e0f559a81c3262b99f07d83e5f62bb73fd821b21a26bdf88eaade9e" exitCode=0 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.744284 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq47d" event={"ID":"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c","Type":"ContainerDied","Data":"a58992aa9e0f559a81c3262b99f07d83e5f62bb73fd821b21a26bdf88eaade9e"} Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.744573 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq47d" event={"ID":"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c","Type":"ContainerDied","Data":"9a024ba6a11194360bb164885db77a7534f05e3fda5de33a9ada2f82d1f3a97e"} Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.744591 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a024ba6a11194360bb164885db77a7534f05e3fda5de33a9ada2f82d1f3a97e" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.747227 5030 generic.go:334] "Generic (PLEG): container finished" podID="e626568d-b431-46f4-ad61-429b99eec2a9" 
containerID="588f96aada3fbbc2ea0a1bac8ded0114644c6b301921933803b703f8ddf2bc37" exitCode=0 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.747257 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c95t" event={"ID":"e626568d-b431-46f4-ad61-429b99eec2a9","Type":"ContainerDied","Data":"588f96aada3fbbc2ea0a1bac8ded0114644c6b301921933803b703f8ddf2bc37"} Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.748863 5030 generic.go:334] "Generic (PLEG): container finished" podID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" containerID="c6e441200129c812d062ae3d3eaede9d5ab531c39453c4d0a60ca97addcb2d9b" exitCode=0 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.748890 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" event={"ID":"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf","Type":"ContainerDied","Data":"c6e441200129c812d062ae3d3eaede9d5ab531c39453c4d0a60ca97addcb2d9b"} Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.752945 5030 generic.go:334] "Generic (PLEG): container finished" podID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerID="0fa74c640893f21273ad2607fe4babdb3de7fe666947d0dd386cca0d34c74679" exitCode=0 Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.752979 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5md7x" event={"ID":"778bafea-1fde-45d3-aa84-612f3cbe06ba","Type":"ContainerDied","Data":"0fa74c640893f21273ad2607fe4babdb3de7fe666947d0dd386cca0d34c74679"} Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.836032 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.907450 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-kube-api-access-n8k5q\") pod \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.907526 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-utilities\") pod \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.907567 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-catalog-content\") pod \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\" (UID: \"e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c\") " Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.909692 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-utilities" (OuterVolumeSpecName: "utilities") pod "e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" (UID: "e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.912804 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-kube-api-access-n8k5q" (OuterVolumeSpecName: "kube-api-access-n8k5q") pod "e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" (UID: "e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c"). InnerVolumeSpecName "kube-api-access-n8k5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:32 crc kubenswrapper[5030]: I1128 11:59:32.963031 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" (UID: "e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.008176 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.008209 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-kube-api-access-n8k5q\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.008226 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.120166 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ntjwt"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.201609 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.201676 5030 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.392750 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.471811 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.478598 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.512927 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-trusted-ca\") pod \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.513009 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rbg9\" (UniqueName: \"kubernetes.io/projected/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-kube-api-access-8rbg9\") pod \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.513068 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-operator-metrics\") pod \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\" (UID: \"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf\") " Nov 
28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.514345 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" (UID: "235ffe06-65ea-4f0e-90b8-1b9ed56df5bf"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.521662 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" (UID: "235ffe06-65ea-4f0e-90b8-1b9ed56df5bf"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.522868 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-kube-api-access-8rbg9" (OuterVolumeSpecName: "kube-api-access-8rbg9") pod "235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" (UID: "235ffe06-65ea-4f0e-90b8-1b9ed56df5bf"). InnerVolumeSpecName "kube-api-access-8rbg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.556246 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.614350 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-catalog-content\") pod \"e626568d-b431-46f4-ad61-429b99eec2a9\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.614420 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-utilities\") pod \"b790b1a3-16d7-498a-8f14-36e52122ad9b\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.614455 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jszjr\" (UniqueName: \"kubernetes.io/projected/e626568d-b431-46f4-ad61-429b99eec2a9-kube-api-access-jszjr\") pod \"e626568d-b431-46f4-ad61-429b99eec2a9\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.614498 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vp5l\" (UniqueName: \"kubernetes.io/projected/b790b1a3-16d7-498a-8f14-36e52122ad9b-kube-api-access-5vp5l\") pod \"b790b1a3-16d7-498a-8f14-36e52122ad9b\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.614531 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-catalog-content\") pod \"b790b1a3-16d7-498a-8f14-36e52122ad9b\" (UID: \"b790b1a3-16d7-498a-8f14-36e52122ad9b\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.615001 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-utilities\") pod \"e626568d-b431-46f4-ad61-429b99eec2a9\" (UID: \"e626568d-b431-46f4-ad61-429b99eec2a9\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.615213 5030 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.615228 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rbg9\" (UniqueName: \"kubernetes.io/projected/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-kube-api-access-8rbg9\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.615239 5030 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.616021 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-utilities" (OuterVolumeSpecName: "utilities") pod "e626568d-b431-46f4-ad61-429b99eec2a9" (UID: "e626568d-b431-46f4-ad61-429b99eec2a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.620093 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-utilities" (OuterVolumeSpecName: "utilities") pod "b790b1a3-16d7-498a-8f14-36e52122ad9b" (UID: "b790b1a3-16d7-498a-8f14-36e52122ad9b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.622339 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b790b1a3-16d7-498a-8f14-36e52122ad9b-kube-api-access-5vp5l" (OuterVolumeSpecName: "kube-api-access-5vp5l") pod "b790b1a3-16d7-498a-8f14-36e52122ad9b" (UID: "b790b1a3-16d7-498a-8f14-36e52122ad9b"). InnerVolumeSpecName "kube-api-access-5vp5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.622941 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e626568d-b431-46f4-ad61-429b99eec2a9-kube-api-access-jszjr" (OuterVolumeSpecName: "kube-api-access-jszjr") pod "e626568d-b431-46f4-ad61-429b99eec2a9" (UID: "e626568d-b431-46f4-ad61-429b99eec2a9"). InnerVolumeSpecName "kube-api-access-jszjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.674917 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e626568d-b431-46f4-ad61-429b99eec2a9" (UID: "e626568d-b431-46f4-ad61-429b99eec2a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.716312 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-utilities\") pod \"778bafea-1fde-45d3-aa84-612f3cbe06ba\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.716705 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-catalog-content\") pod \"778bafea-1fde-45d3-aa84-612f3cbe06ba\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.716803 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmphm\" (UniqueName: \"kubernetes.io/projected/778bafea-1fde-45d3-aa84-612f3cbe06ba-kube-api-access-fmphm\") pod \"778bafea-1fde-45d3-aa84-612f3cbe06ba\" (UID: \"778bafea-1fde-45d3-aa84-612f3cbe06ba\") " Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.717087 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.717172 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.717257 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jszjr\" (UniqueName: \"kubernetes.io/projected/e626568d-b431-46f4-ad61-429b99eec2a9-kube-api-access-jszjr\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.717462 
5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vp5l\" (UniqueName: \"kubernetes.io/projected/b790b1a3-16d7-498a-8f14-36e52122ad9b-kube-api-access-5vp5l\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.717629 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e626568d-b431-46f4-ad61-429b99eec2a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.717324 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-utilities" (OuterVolumeSpecName: "utilities") pod "778bafea-1fde-45d3-aa84-612f3cbe06ba" (UID: "778bafea-1fde-45d3-aa84-612f3cbe06ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.719579 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778bafea-1fde-45d3-aa84-612f3cbe06ba-kube-api-access-fmphm" (OuterVolumeSpecName: "kube-api-access-fmphm") pod "778bafea-1fde-45d3-aa84-612f3cbe06ba" (UID: "778bafea-1fde-45d3-aa84-612f3cbe06ba"). InnerVolumeSpecName "kube-api-access-fmphm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.720580 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b790b1a3-16d7-498a-8f14-36e52122ad9b" (UID: "b790b1a3-16d7-498a-8f14-36e52122ad9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.735179 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "778bafea-1fde-45d3-aa84-612f3cbe06ba" (UID: "778bafea-1fde-45d3-aa84-612f3cbe06ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.759991 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spqtx" event={"ID":"b790b1a3-16d7-498a-8f14-36e52122ad9b","Type":"ContainerDied","Data":"872c61d63d51b04903960e26b1765b601be06d6ca42fa7db82ab56ec0952891f"} Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.760032 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spqtx" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.760080 5030 scope.go:117] "RemoveContainer" containerID="4a92fae74d12cd1365125dbca346d6c2698fb9dee32971b166f665c033b3600c" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.762980 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7c95t" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.763018 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c95t" event={"ID":"e626568d-b431-46f4-ad61-429b99eec2a9","Type":"ContainerDied","Data":"14ed36eee0980d5e624cd37c6ce7192979fbf31329e0369f80c2fa7846b7c27b"} Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.769055 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" event={"ID":"235ffe06-65ea-4f0e-90b8-1b9ed56df5bf","Type":"ContainerDied","Data":"49e117dceac2c0d3942bee81348ff6d53b526619adba0ac0d4b5c8347b651718"} Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.769123 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-frtvx" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.774824 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5md7x" event={"ID":"778bafea-1fde-45d3-aa84-612f3cbe06ba","Type":"ContainerDied","Data":"0c77aab3c0595c371ec2280e2a643c2993aa2133917cbf46cb3275a4bd133e01"} Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.774854 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5md7x" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.775453 5030 scope.go:117] "RemoveContainer" containerID="d227e530adb92f3bc5ffb7208dffa450879c0f7c00920e026e0d5e92783c493f" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.777544 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq47d" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.777637 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" event={"ID":"da571a9b-f5ae-4bcf-b98c-f92299206a54","Type":"ContainerStarted","Data":"5005649f5c0cfe02f7c4551cb42c2b92f8380f83b9251d30cb1807636dcb1b68"} Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.798747 5030 scope.go:117] "RemoveContainer" containerID="f4fffeea4ac7db9753f2df4255462868fd5a2f8192fc10648bf7284203223a94" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.810818 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spqtx"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.813940 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spqtx"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.819855 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmphm\" (UniqueName: \"kubernetes.io/projected/778bafea-1fde-45d3-aa84-612f3cbe06ba-kube-api-access-fmphm\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.819897 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.819914 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b790b1a3-16d7-498a-8f14-36e52122ad9b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.819927 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778bafea-1fde-45d3-aa84-612f3cbe06ba-catalog-content\") on node \"crc\" 
DevicePath \"\"" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.825702 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7c95t"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.827910 5030 scope.go:117] "RemoveContainer" containerID="588f96aada3fbbc2ea0a1bac8ded0114644c6b301921933803b703f8ddf2bc37" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.829578 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7c95t"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.843885 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5md7x"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.852018 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5md7x"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.859363 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-frtvx"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.866498 5030 scope.go:117] "RemoveContainer" containerID="89e55122214dcaf2a0c0a9bb74dbcfa4238e00af8112f5d5aff4afe931cbe606" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.866642 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-frtvx"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.876012 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq47d"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.882260 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lq47d"] Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.890510 5030 scope.go:117] "RemoveContainer" containerID="b3837ce42dec3d4b3b048299e2dc095f9f7cc0f4ba6b6655c3679b67765694b9" Nov 28 11:59:33 crc kubenswrapper[5030]: 
I1128 11:59:33.905268 5030 scope.go:117] "RemoveContainer" containerID="c6e441200129c812d062ae3d3eaede9d5ab531c39453c4d0a60ca97addcb2d9b" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.928232 5030 scope.go:117] "RemoveContainer" containerID="0fa74c640893f21273ad2607fe4babdb3de7fe666947d0dd386cca0d34c74679" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.947271 5030 scope.go:117] "RemoveContainer" containerID="5e1da0f367f922c602066ceaa3bc406e2f37c5f2ab956dac28e4d8defe0dda49" Nov 28 11:59:33 crc kubenswrapper[5030]: I1128 11:59:33.964265 5030 scope.go:117] "RemoveContainer" containerID="305c1783fc5bcbc7b5d2fb81a2508406eef5848a167964637b54ff482cc6992c" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.400140 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" path="/var/lib/kubelet/pods/235ffe06-65ea-4f0e-90b8-1b9ed56df5bf/volumes" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.400901 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" path="/var/lib/kubelet/pods/778bafea-1fde-45d3-aa84-612f3cbe06ba/volumes" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.402743 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" path="/var/lib/kubelet/pods/b790b1a3-16d7-498a-8f14-36e52122ad9b/volumes" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.404924 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" path="/var/lib/kubelet/pods/e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c/volumes" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.406160 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" path="/var/lib/kubelet/pods/e626568d-b431-46f4-ad61-429b99eec2a9/volumes" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.572120 5030 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmp59"] Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.572890 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.572922 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.572945 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.572956 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.572974 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.572988 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573000 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573012 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573023 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573033 5030 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573046 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" containerName="marketplace-operator" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573055 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" containerName="marketplace-operator" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573071 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573082 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573095 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573106 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573126 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573137 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="extract-content" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573174 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573187 5030 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573219 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573232 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573246 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573258 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: E1128 11:59:34.573273 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573285 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="extract-utilities" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573439 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="235ffe06-65ea-4f0e-90b8-1b9ed56df5bf" containerName="marketplace-operator" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573459 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1cfd735-7a89-4c9e-ace8-2dcb35cfed9c" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573509 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="778bafea-1fde-45d3-aa84-612f3cbe06ba" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573527 5030 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e626568d-b431-46f4-ad61-429b99eec2a9" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.573542 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b790b1a3-16d7-498a-8f14-36e52122ad9b" containerName="registry-server" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.574772 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.577786 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.588166 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmp59"] Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.632081 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57j2z\" (UniqueName: \"kubernetes.io/projected/4ffcaa37-8853-409e-aeff-52278c6f2028-kube-api-access-57j2z\") pod \"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.632158 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffcaa37-8853-409e-aeff-52278c6f2028-catalog-content\") pod \"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.632199 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffcaa37-8853-409e-aeff-52278c6f2028-utilities\") pod 
\"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.733188 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57j2z\" (UniqueName: \"kubernetes.io/projected/4ffcaa37-8853-409e-aeff-52278c6f2028-kube-api-access-57j2z\") pod \"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.733276 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffcaa37-8853-409e-aeff-52278c6f2028-catalog-content\") pod \"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.733330 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffcaa37-8853-409e-aeff-52278c6f2028-utilities\") pod \"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.733891 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffcaa37-8853-409e-aeff-52278c6f2028-utilities\") pod \"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.734045 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffcaa37-8853-409e-aeff-52278c6f2028-catalog-content\") pod \"redhat-marketplace-pmp59\" (UID: 
\"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.749316 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ssp9x"] Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.750300 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.753599 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.766696 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ssp9x"] Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.771047 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57j2z\" (UniqueName: \"kubernetes.io/projected/4ffcaa37-8853-409e-aeff-52278c6f2028-kube-api-access-57j2z\") pod \"redhat-marketplace-pmp59\" (UID: \"4ffcaa37-8853-409e-aeff-52278c6f2028\") " pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.805236 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" event={"ID":"da571a9b-f5ae-4bcf-b98c-f92299206a54","Type":"ContainerStarted","Data":"0b5b80d26f5227434698846bf6136b475049607722a26de683cdee7136f8e630"} Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.806521 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.813448 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 
11:59:34.827058 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ntjwt" podStartSLOduration=2.8270380619999997 podStartE2EDuration="2.827038062s" podCreationTimestamp="2025-11-28 11:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 11:59:34.822187088 +0000 UTC m=+392.763929771" watchObservedRunningTime="2025-11-28 11:59:34.827038062 +0000 UTC m=+392.768780745" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.933558 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.935983 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951b2dc6-7d8d-4f04-8c86-572af9af6000-utilities\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.936063 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrm4\" (UniqueName: \"kubernetes.io/projected/951b2dc6-7d8d-4f04-8c86-572af9af6000-kube-api-access-zfrm4\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:34 crc kubenswrapper[5030]: I1128 11:59:34.936208 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951b2dc6-7d8d-4f04-8c86-572af9af6000-catalog-content\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc 
kubenswrapper[5030]: I1128 11:59:35.038351 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951b2dc6-7d8d-4f04-8c86-572af9af6000-catalog-content\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.038780 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951b2dc6-7d8d-4f04-8c86-572af9af6000-utilities\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.038820 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfrm4\" (UniqueName: \"kubernetes.io/projected/951b2dc6-7d8d-4f04-8c86-572af9af6000-kube-api-access-zfrm4\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.039744 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/951b2dc6-7d8d-4f04-8c86-572af9af6000-catalog-content\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.039753 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/951b2dc6-7d8d-4f04-8c86-572af9af6000-utilities\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.067413 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfrm4\" (UniqueName: \"kubernetes.io/projected/951b2dc6-7d8d-4f04-8c86-572af9af6000-kube-api-access-zfrm4\") pod \"redhat-operators-ssp9x\" (UID: \"951b2dc6-7d8d-4f04-8c86-572af9af6000\") " pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.071511 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.335838 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmp59"] Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.465320 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ssp9x"] Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.821053 5030 generic.go:334] "Generic (PLEG): container finished" podID="4ffcaa37-8853-409e-aeff-52278c6f2028" containerID="1950e01aeaefe00e39c4d4cca6cda9bdffbbc0fc9d63c56ce6dd17fee2a00fc7" exitCode=0 Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.821152 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmp59" event={"ID":"4ffcaa37-8853-409e-aeff-52278c6f2028","Type":"ContainerDied","Data":"1950e01aeaefe00e39c4d4cca6cda9bdffbbc0fc9d63c56ce6dd17fee2a00fc7"} Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.821877 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmp59" event={"ID":"4ffcaa37-8853-409e-aeff-52278c6f2028","Type":"ContainerStarted","Data":"e13a7686f958119904075817c44942163bb1e7d6bc20746ccb8d4ab11f4e5b5e"} Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.825597 5030 generic.go:334] "Generic (PLEG): container finished" podID="951b2dc6-7d8d-4f04-8c86-572af9af6000" 
containerID="8803e64be4a423346ef2132e818e82c3b4740e96911517433211df7cc59e1b19" exitCode=0 Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.825733 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp9x" event={"ID":"951b2dc6-7d8d-4f04-8c86-572af9af6000","Type":"ContainerDied","Data":"8803e64be4a423346ef2132e818e82c3b4740e96911517433211df7cc59e1b19"} Nov 28 11:59:35 crc kubenswrapper[5030]: I1128 11:59:35.825799 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp9x" event={"ID":"951b2dc6-7d8d-4f04-8c86-572af9af6000","Type":"ContainerStarted","Data":"623b59f2146641e056a1d65dfade490ca7187a48cee9b1039b98aff612f34240"} Nov 28 11:59:36 crc kubenswrapper[5030]: I1128 11:59:36.834814 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp9x" event={"ID":"951b2dc6-7d8d-4f04-8c86-572af9af6000","Type":"ContainerStarted","Data":"ccd3433f7f48395542a57c216c2e96d4067cd317e78c710da10058be7d147aa6"} Nov 28 11:59:36 crc kubenswrapper[5030]: I1128 11:59:36.946850 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tztrm"] Nov 28 11:59:36 crc kubenswrapper[5030]: I1128 11:59:36.948170 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:36 crc kubenswrapper[5030]: I1128 11:59:36.950775 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 11:59:36 crc kubenswrapper[5030]: I1128 11:59:36.966316 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tztrm"] Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.065536 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-utilities\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.065586 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-catalog-content\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.065624 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx4fx\" (UniqueName: \"kubernetes.io/projected/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-kube-api-access-zx4fx\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.147546 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qv7zh"] Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.148520 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.152113 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.163157 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qv7zh"] Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.167342 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-utilities\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.167396 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-catalog-content\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.167536 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx4fx\" (UniqueName: \"kubernetes.io/projected/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-kube-api-access-zx4fx\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.167951 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-utilities\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " 
pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.168097 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-catalog-content\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.185671 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx4fx\" (UniqueName: \"kubernetes.io/projected/f3b6b1e4-08cb-4867-b88a-ee08ddcaa045-kube-api-access-zx4fx\") pod \"community-operators-tztrm\" (UID: \"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045\") " pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.264854 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.268520 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-catalog-content\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.268632 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc5p7\" (UniqueName: \"kubernetes.io/projected/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-kube-api-access-tc5p7\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.268742 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-utilities\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.370347 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc5p7\" (UniqueName: \"kubernetes.io/projected/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-kube-api-access-tc5p7\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.370760 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-utilities\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.370830 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-catalog-content\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.371715 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-catalog-content\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.371731 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-utilities\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.388258 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc5p7\" (UniqueName: \"kubernetes.io/projected/64e3a3cf-b757-4bc8-8b2e-acd2cd843e55-kube-api-access-tc5p7\") pod \"certified-operators-qv7zh\" (UID: \"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55\") " pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.465341 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.725599 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tztrm"] Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.842048 5030 generic.go:334] "Generic (PLEG): container finished" podID="4ffcaa37-8853-409e-aeff-52278c6f2028" containerID="bec049f16fff36f2deef301508ec91ddffdcddcd28dda3f79693f3b83b26503b" exitCode=0 Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.842118 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmp59" event={"ID":"4ffcaa37-8853-409e-aeff-52278c6f2028","Type":"ContainerDied","Data":"bec049f16fff36f2deef301508ec91ddffdcddcd28dda3f79693f3b83b26503b"} Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 11:59:37.843846 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztrm" event={"ID":"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045","Type":"ContainerStarted","Data":"f1f8f83e854d96a1d5fbe4638ec7010a4d99aa7b2ef7a0ac3e360662295af8fb"} Nov 28 11:59:37 crc kubenswrapper[5030]: I1128 
11:59:37.868780 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qv7zh"] Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.851291 5030 generic.go:334] "Generic (PLEG): container finished" podID="951b2dc6-7d8d-4f04-8c86-572af9af6000" containerID="ccd3433f7f48395542a57c216c2e96d4067cd317e78c710da10058be7d147aa6" exitCode=0 Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.851435 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp9x" event={"ID":"951b2dc6-7d8d-4f04-8c86-572af9af6000","Type":"ContainerDied","Data":"ccd3433f7f48395542a57c216c2e96d4067cd317e78c710da10058be7d147aa6"} Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.855517 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmp59" event={"ID":"4ffcaa37-8853-409e-aeff-52278c6f2028","Type":"ContainerStarted","Data":"825ca8ddd9c6cf0a587f4c264228494780025c93ed0f044a5f4296187542a547"} Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.860065 5030 generic.go:334] "Generic (PLEG): container finished" podID="f3b6b1e4-08cb-4867-b88a-ee08ddcaa045" containerID="1bd8b33dfbc0f455482d08dfe50fdab117051bf7cc232dbe38b489c1d1611d0d" exitCode=0 Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.860138 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztrm" event={"ID":"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045","Type":"ContainerDied","Data":"1bd8b33dfbc0f455482d08dfe50fdab117051bf7cc232dbe38b489c1d1611d0d"} Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.861860 5030 generic.go:334] "Generic (PLEG): container finished" podID="64e3a3cf-b757-4bc8-8b2e-acd2cd843e55" containerID="8b32351522613cb2f5ade3e2e60bf0afa222abca67e2e5f08a8ac8b92b63f57a" exitCode=0 Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.861907 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-qv7zh" event={"ID":"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55","Type":"ContainerDied","Data":"8b32351522613cb2f5ade3e2e60bf0afa222abca67e2e5f08a8ac8b92b63f57a"} Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.861968 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7zh" event={"ID":"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55","Type":"ContainerStarted","Data":"dc3717895d15fcebe7494696d65c7a2f9ebc70b92c1ac50fd257d3b80cef76d0"} Nov 28 11:59:38 crc kubenswrapper[5030]: I1128 11:59:38.929858 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmp59" podStartSLOduration=2.485027525 podStartE2EDuration="4.929839582s" podCreationTimestamp="2025-11-28 11:59:34 +0000 UTC" firstStartedPulling="2025-11-28 11:59:35.825131233 +0000 UTC m=+393.766873946" lastFinishedPulling="2025-11-28 11:59:38.2699433 +0000 UTC m=+396.211686003" observedRunningTime="2025-11-28 11:59:38.929188893 +0000 UTC m=+396.870931606" watchObservedRunningTime="2025-11-28 11:59:38.929839582 +0000 UTC m=+396.871582265" Nov 28 11:59:39 crc kubenswrapper[5030]: I1128 11:59:39.871751 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ssp9x" event={"ID":"951b2dc6-7d8d-4f04-8c86-572af9af6000","Type":"ContainerStarted","Data":"fd18a65107f77a058d68a525e9da316ad33906fa4f812c3cd7edab6199287314"} Nov 28 11:59:39 crc kubenswrapper[5030]: I1128 11:59:39.877617 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztrm" event={"ID":"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045","Type":"ContainerStarted","Data":"dcb55b3653b3f6e935fa2a2320b38b1671ec3156f539406d34129fe0733a6602"} Nov 28 11:59:39 crc kubenswrapper[5030]: I1128 11:59:39.885503 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7zh" 
event={"ID":"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55","Type":"ContainerStarted","Data":"735a4584c21067a518f61e2a35333240cbc2ebb4a2d3eafb666d293be5348e7f"} Nov 28 11:59:39 crc kubenswrapper[5030]: I1128 11:59:39.894587 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ssp9x" podStartSLOduration=2.184394325 podStartE2EDuration="5.894570681s" podCreationTimestamp="2025-11-28 11:59:34 +0000 UTC" firstStartedPulling="2025-11-28 11:59:35.82823494 +0000 UTC m=+393.769977653" lastFinishedPulling="2025-11-28 11:59:39.538411326 +0000 UTC m=+397.480154009" observedRunningTime="2025-11-28 11:59:39.891671452 +0000 UTC m=+397.833414145" watchObservedRunningTime="2025-11-28 11:59:39.894570681 +0000 UTC m=+397.836313364" Nov 28 11:59:40 crc kubenswrapper[5030]: I1128 11:59:40.892734 5030 generic.go:334] "Generic (PLEG): container finished" podID="64e3a3cf-b757-4bc8-8b2e-acd2cd843e55" containerID="735a4584c21067a518f61e2a35333240cbc2ebb4a2d3eafb666d293be5348e7f" exitCode=0 Nov 28 11:59:40 crc kubenswrapper[5030]: I1128 11:59:40.892781 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7zh" event={"ID":"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55","Type":"ContainerDied","Data":"735a4584c21067a518f61e2a35333240cbc2ebb4a2d3eafb666d293be5348e7f"} Nov 28 11:59:40 crc kubenswrapper[5030]: I1128 11:59:40.895209 5030 generic.go:334] "Generic (PLEG): container finished" podID="f3b6b1e4-08cb-4867-b88a-ee08ddcaa045" containerID="dcb55b3653b3f6e935fa2a2320b38b1671ec3156f539406d34129fe0733a6602" exitCode=0 Nov 28 11:59:40 crc kubenswrapper[5030]: I1128 11:59:40.895239 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztrm" event={"ID":"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045","Type":"ContainerDied","Data":"dcb55b3653b3f6e935fa2a2320b38b1671ec3156f539406d34129fe0733a6602"} Nov 28 11:59:43 crc kubenswrapper[5030]: I1128 11:59:43.914030 
5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7zh" event={"ID":"64e3a3cf-b757-4bc8-8b2e-acd2cd843e55","Type":"ContainerStarted","Data":"f848628b49f55ddc4e3688427ccc975d90a38789ffb4ece3741067662e08f303"} Nov 28 11:59:43 crc kubenswrapper[5030]: I1128 11:59:43.916320 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tztrm" event={"ID":"f3b6b1e4-08cb-4867-b88a-ee08ddcaa045","Type":"ContainerStarted","Data":"cad77c5de252936f97447e4cc39e01914377cb9253ddf3ef3260846408834854"} Nov 28 11:59:43 crc kubenswrapper[5030]: I1128 11:59:43.943092 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qv7zh" podStartSLOduration=4.369981398 podStartE2EDuration="6.94307012s" podCreationTimestamp="2025-11-28 11:59:37 +0000 UTC" firstStartedPulling="2025-11-28 11:59:38.865704509 +0000 UTC m=+396.807447192" lastFinishedPulling="2025-11-28 11:59:41.438793201 +0000 UTC m=+399.380535914" observedRunningTime="2025-11-28 11:59:43.937557618 +0000 UTC m=+401.879300321" watchObservedRunningTime="2025-11-28 11:59:43.94307012 +0000 UTC m=+401.884812833" Nov 28 11:59:44 crc kubenswrapper[5030]: I1128 11:59:44.934066 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:44 crc kubenswrapper[5030]: I1128 11:59:44.934132 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:44 crc kubenswrapper[5030]: I1128 11:59:44.997076 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:45 crc kubenswrapper[5030]: I1128 11:59:45.017935 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tztrm" 
podStartSLOduration=6.445659426 podStartE2EDuration="9.017908464s" podCreationTimestamp="2025-11-28 11:59:36 +0000 UTC" firstStartedPulling="2025-11-28 11:59:38.861577295 +0000 UTC m=+396.803319978" lastFinishedPulling="2025-11-28 11:59:41.433826333 +0000 UTC m=+399.375569016" observedRunningTime="2025-11-28 11:59:43.965309855 +0000 UTC m=+401.907052548" watchObservedRunningTime="2025-11-28 11:59:45.017908464 +0000 UTC m=+402.959651157" Nov 28 11:59:45 crc kubenswrapper[5030]: I1128 11:59:45.072317 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:45 crc kubenswrapper[5030]: I1128 11:59:45.072395 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:46 crc kubenswrapper[5030]: I1128 11:59:46.001405 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pmp59" Nov 28 11:59:46 crc kubenswrapper[5030]: I1128 11:59:46.107956 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ssp9x" podUID="951b2dc6-7d8d-4f04-8c86-572af9af6000" containerName="registry-server" probeResult="failure" output=< Nov 28 11:59:46 crc kubenswrapper[5030]: timeout: failed to connect service ":50051" within 1s Nov 28 11:59:46 crc kubenswrapper[5030]: > Nov 28 11:59:47 crc kubenswrapper[5030]: I1128 11:59:47.266023 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:47 crc kubenswrapper[5030]: I1128 11:59:47.267565 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:47 crc kubenswrapper[5030]: I1128 11:59:47.318644 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tztrm" Nov 28 
11:59:47 crc kubenswrapper[5030]: I1128 11:59:47.466652 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:47 crc kubenswrapper[5030]: I1128 11:59:47.466721 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:47 crc kubenswrapper[5030]: I1128 11:59:47.503820 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 11:59:49 crc kubenswrapper[5030]: I1128 11:59:49.032176 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tztrm" Nov 28 11:59:55 crc kubenswrapper[5030]: I1128 11:59:55.124273 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:55 crc kubenswrapper[5030]: I1128 11:59:55.175695 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ssp9x" Nov 28 11:59:57 crc kubenswrapper[5030]: I1128 11:59:57.526865 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qv7zh" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.178973 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9"] Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.180778 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.183116 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.184495 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.194165 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9"] Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.309531 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9ee017-8456-4952-9f02-7398a294a590-config-volume\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.309581 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9ee017-8456-4952-9f02-7398a294a590-secret-volume\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.309632 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gslj\" (UniqueName: \"kubernetes.io/projected/2d9ee017-8456-4952-9f02-7398a294a590-kube-api-access-2gslj\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.410383 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9ee017-8456-4952-9f02-7398a294a590-config-volume\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.410458 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9ee017-8456-4952-9f02-7398a294a590-secret-volume\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.410534 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gslj\" (UniqueName: \"kubernetes.io/projected/2d9ee017-8456-4952-9f02-7398a294a590-kube-api-access-2gslj\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.411980 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9ee017-8456-4952-9f02-7398a294a590-config-volume\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.417379 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2d9ee017-8456-4952-9f02-7398a294a590-secret-volume\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.431681 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gslj\" (UniqueName: \"kubernetes.io/projected/2d9ee017-8456-4952-9f02-7398a294a590-kube-api-access-2gslj\") pod \"collect-profiles-29405520-wfcf9\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:00 crc kubenswrapper[5030]: I1128 12:00:00.503273 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:01 crc kubenswrapper[5030]: I1128 12:00:00.998057 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9"] Nov 28 12:00:01 crc kubenswrapper[5030]: I1128 12:00:01.032796 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" event={"ID":"2d9ee017-8456-4952-9f02-7398a294a590","Type":"ContainerStarted","Data":"ecde8bc47162b7cc0f4a51b69ee8fd5e4b56572dbd1747b9a26619925c3c2c55"} Nov 28 12:00:02 crc kubenswrapper[5030]: I1128 12:00:02.039842 5030 generic.go:334] "Generic (PLEG): container finished" podID="2d9ee017-8456-4952-9f02-7398a294a590" containerID="ff9adf5c8cb93f3bf208369b797daf7522c5518810171e6304245f5e829dfa70" exitCode=0 Nov 28 12:00:02 crc kubenswrapper[5030]: I1128 12:00:02.039944 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" 
event={"ID":"2d9ee017-8456-4952-9f02-7398a294a590","Type":"ContainerDied","Data":"ff9adf5c8cb93f3bf208369b797daf7522c5518810171e6304245f5e829dfa70"} Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.201977 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.202359 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.202422 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.203176 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8114faafcf69ecaca67dafc3c5944ffd0ee0fd234807f68465536643254d90e4"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.203249 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://8114faafcf69ecaca67dafc3c5944ffd0ee0fd234807f68465536643254d90e4" gracePeriod=600 Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.368573 5030 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.465941 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9ee017-8456-4952-9f02-7398a294a590-config-volume\") pod \"2d9ee017-8456-4952-9f02-7398a294a590\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.466374 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gslj\" (UniqueName: \"kubernetes.io/projected/2d9ee017-8456-4952-9f02-7398a294a590-kube-api-access-2gslj\") pod \"2d9ee017-8456-4952-9f02-7398a294a590\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.466445 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9ee017-8456-4952-9f02-7398a294a590-secret-volume\") pod \"2d9ee017-8456-4952-9f02-7398a294a590\" (UID: \"2d9ee017-8456-4952-9f02-7398a294a590\") " Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.466785 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ee017-8456-4952-9f02-7398a294a590-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d9ee017-8456-4952-9f02-7398a294a590" (UID: "2d9ee017-8456-4952-9f02-7398a294a590"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.468233 5030 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9ee017-8456-4952-9f02-7398a294a590-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.472988 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9ee017-8456-4952-9f02-7398a294a590-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d9ee017-8456-4952-9f02-7398a294a590" (UID: "2d9ee017-8456-4952-9f02-7398a294a590"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.476313 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9ee017-8456-4952-9f02-7398a294a590-kube-api-access-2gslj" (OuterVolumeSpecName: "kube-api-access-2gslj") pod "2d9ee017-8456-4952-9f02-7398a294a590" (UID: "2d9ee017-8456-4952-9f02-7398a294a590"). InnerVolumeSpecName "kube-api-access-2gslj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.569965 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gslj\" (UniqueName: \"kubernetes.io/projected/2d9ee017-8456-4952-9f02-7398a294a590-kube-api-access-2gslj\") on node \"crc\" DevicePath \"\"" Nov 28 12:00:03 crc kubenswrapper[5030]: I1128 12:00:03.570016 5030 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9ee017-8456-4952-9f02-7398a294a590-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:00:04 crc kubenswrapper[5030]: I1128 12:00:04.054666 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" event={"ID":"2d9ee017-8456-4952-9f02-7398a294a590","Type":"ContainerDied","Data":"ecde8bc47162b7cc0f4a51b69ee8fd5e4b56572dbd1747b9a26619925c3c2c55"} Nov 28 12:00:04 crc kubenswrapper[5030]: I1128 12:00:04.055028 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecde8bc47162b7cc0f4a51b69ee8fd5e4b56572dbd1747b9a26619925c3c2c55" Nov 28 12:00:04 crc kubenswrapper[5030]: I1128 12:00:04.054761 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405520-wfcf9" Nov 28 12:00:04 crc kubenswrapper[5030]: I1128 12:00:04.062560 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="8114faafcf69ecaca67dafc3c5944ffd0ee0fd234807f68465536643254d90e4" exitCode=0 Nov 28 12:00:04 crc kubenswrapper[5030]: I1128 12:00:04.062628 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"8114faafcf69ecaca67dafc3c5944ffd0ee0fd234807f68465536643254d90e4"} Nov 28 12:00:04 crc kubenswrapper[5030]: I1128 12:00:04.062670 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"8c68d424e56207f9bef5dcba47aa3662682bfcc69409b93749507c45f6366456"} Nov 28 12:00:04 crc kubenswrapper[5030]: I1128 12:00:04.062704 5030 scope.go:117] "RemoveContainer" containerID="9176163dac04fa7a54084b6eb147ee6c8af5556069eb6673d3bb9e8970508f94" Nov 28 12:02:02 crc kubenswrapper[5030]: I1128 12:02:02.760374 5030 scope.go:117] "RemoveContainer" containerID="2e94a3ba737c0befff221395a03fb9f02362e4be5532a815279481638d5592ac" Nov 28 12:02:03 crc kubenswrapper[5030]: I1128 12:02:03.201974 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:02:03 crc kubenswrapper[5030]: I1128 12:02:03.202088 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:02:33 crc kubenswrapper[5030]: I1128 12:02:33.202231 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:02:33 crc kubenswrapper[5030]: I1128 12:02:33.203101 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:03:02 crc kubenswrapper[5030]: I1128 12:03:02.791214 5030 scope.go:117] "RemoveContainer" containerID="c0e9b74db3e474a4dc1792f01b64bbf34e8e69bfafa383efe39f52bad83a52cb" Nov 28 12:03:02 crc kubenswrapper[5030]: I1128 12:03:02.826190 5030 scope.go:117] "RemoveContainer" containerID="a58992aa9e0f559a81c3262b99f07d83e5f62bb73fd821b21a26bdf88eaade9e" Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.202907 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.203018 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.203170 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.204008 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c68d424e56207f9bef5dcba47aa3662682bfcc69409b93749507c45f6366456"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.204122 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://8c68d424e56207f9bef5dcba47aa3662682bfcc69409b93749507c45f6366456" gracePeriod=600 Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.456975 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="8c68d424e56207f9bef5dcba47aa3662682bfcc69409b93749507c45f6366456" exitCode=0 Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.457028 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"8c68d424e56207f9bef5dcba47aa3662682bfcc69409b93749507c45f6366456"} Nov 28 12:03:03 crc kubenswrapper[5030]: I1128 12:03:03.457561 5030 scope.go:117] "RemoveContainer" containerID="8114faafcf69ecaca67dafc3c5944ffd0ee0fd234807f68465536643254d90e4" Nov 28 12:03:04 crc kubenswrapper[5030]: I1128 12:03:04.467050 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"1d6b24c1331357c81e9c3721fca85bfc8df7a48f3286c0b8748f4a82dbcaa4eb"} Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.277306 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vnfr"] Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.279147 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-controller" containerID="cri-o://54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff" gracePeriod=30 Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.279258 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="nbdb" containerID="cri-o://50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857" gracePeriod=30 Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.279344 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="northd" containerID="cri-o://ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294" gracePeriod=30 Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.279379 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="sbdb" containerID="cri-o://7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7" gracePeriod=30 Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.279462 5030 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-acl-logging" containerID="cri-o://fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb" gracePeriod=30 Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.279544 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kube-rbac-proxy-node" containerID="cri-o://e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2" gracePeriod=30 Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.279658 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1" gracePeriod=30 Nov 28 12:04:56 crc kubenswrapper[5030]: I1128 12:04:56.353257 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" containerID="cri-o://5a6f6d706fba68f794de96394a58708bb284b375ac3193a214cd4f55b207d8d1" gracePeriod=30 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.280111 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/2.log" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.281474 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/1.log" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.281560 5030 generic.go:334] "Generic (PLEG): container finished" podID="4ee84379-3754-48c5-aaab-15dbc36caa16" 
containerID="018e3d90020cc03b39dc0110a6414d3de5aa9a5b4fdff14fe5f0fec5829fd973" exitCode=2 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.281679 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerDied","Data":"018e3d90020cc03b39dc0110a6414d3de5aa9a5b4fdff14fe5f0fec5829fd973"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.281770 5030 scope.go:117] "RemoveContainer" containerID="7589f5a1f3ffa2039e76ad57648413ed1c1a7b0047e023696616bf1ac679be7e" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.282431 5030 scope.go:117] "RemoveContainer" containerID="018e3d90020cc03b39dc0110a6414d3de5aa9a5b4fdff14fe5f0fec5829fd973" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.283755 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kfz78_openshift-multus(4ee84379-3754-48c5-aaab-15dbc36caa16)\"" pod="openshift-multus/multus-kfz78" podUID="4ee84379-3754-48c5-aaab-15dbc36caa16" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.293224 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovnkube-controller/3.log" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.297839 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovn-acl-logging/0.log" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.298793 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovn-controller/0.log" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299391 5030 generic.go:334] "Generic (PLEG): container finished" 
podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="5a6f6d706fba68f794de96394a58708bb284b375ac3193a214cd4f55b207d8d1" exitCode=0 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299427 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7" exitCode=0 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299442 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857" exitCode=0 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299461 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294" exitCode=0 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299489 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"5a6f6d706fba68f794de96394a58708bb284b375ac3193a214cd4f55b207d8d1"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299562 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299585 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299606 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299624 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299505 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1" exitCode=0 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299662 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2" exitCode=0 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299676 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb" exitCode=143 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299690 5030 generic.go:334] "Generic (PLEG): container finished" podID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerID="54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff" exitCode=143 Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299738 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299816 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.299856 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff"} Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.400684 5030 scope.go:117] "RemoveContainer" containerID="7c83a86b6d8245c06d7b2c89bb2566f93b9b510fe447390ef3c98a1fa16e1116" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.505638 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovn-acl-logging/0.log" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.506830 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovn-controller/0.log" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.509439 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.573769 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s2m95"] Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574021 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="nbdb" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574037 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="nbdb" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574053 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574063 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574072 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574082 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574093 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="northd" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574102 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="northd" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574111 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" 
containerName="kube-rbac-proxy-ovn-metrics" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574119 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574128 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kube-rbac-proxy-node" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574136 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kube-rbac-proxy-node" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574147 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9ee017-8456-4952-9f02-7398a294a590" containerName="collect-profiles" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574156 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9ee017-8456-4952-9f02-7398a294a590" containerName="collect-profiles" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574169 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-acl-logging" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574177 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-acl-logging" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574193 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574201 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574212 5030 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kubecfg-setup" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574219 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kubecfg-setup" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574234 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="sbdb" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574245 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="sbdb" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574255 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574265 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574394 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-acl-logging" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574410 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovn-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574420 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="northd" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574430 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574440 5030 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574450 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574459 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9ee017-8456-4952-9f02-7398a294a590" containerName="collect-profiles" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574478 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="nbdb" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574487 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="sbdb" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574533 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="kube-rbac-proxy-node" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574546 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574556 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574666 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574677 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574787 5030 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: E1128 12:04:57.574897 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.574906 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" containerName="ovnkube-controller" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.576790 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608366 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-script-lib\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608448 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-slash\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608547 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-systemd-units\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608605 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xgmb\" (UniqueName: 
\"kubernetes.io/projected/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-kube-api-access-9xgmb\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608653 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-ovn\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608696 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-netd\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608685 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-slash" (OuterVolumeSpecName: "host-slash") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608743 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-node-log\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608785 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-netns\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608723 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608823 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-var-lib-openvswitch\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608778 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-node-log" (OuterVolumeSpecName: "node-log") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608829 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608918 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608890 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-bin\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608950 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608938 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.608922 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609020 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-kubelet\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609060 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-ovn-kubernetes\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609107 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-env-overrides\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: 
\"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609062 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609162 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-config\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609198 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-openvswitch\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609209 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609235 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovn-node-metrics-cert\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609276 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-systemd\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609307 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-etc-openvswitch\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609336 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609382 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-log-socket\") pod \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\" (UID: \"44c9601c-cc85-4e79-aadd-8d20e2ea9f12\") " Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609264 5030 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609310 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609584 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609593 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609634 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-log-socket" (OuterVolumeSpecName: "log-socket") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609790 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609853 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609911 5030 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.609977 5030 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610001 5030 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610015 5030 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-node-log\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610028 5030 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610041 5030 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610055 5030 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610068 5030 
reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610081 5030 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610096 5030 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610109 5030 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610120 5030 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610132 5030 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610145 5030 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610158 5030 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.610170 5030 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.615778 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.616602 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-kube-api-access-9xgmb" (OuterVolumeSpecName: "kube-api-access-9xgmb") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "kube-api-access-9xgmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.632608 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "44c9601c-cc85-4e79-aadd-8d20e2ea9f12" (UID: "44c9601c-cc85-4e79-aadd-8d20e2ea9f12"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711410 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovn-node-metrics-cert\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711525 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-log-socket\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711578 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-run-netns\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711620 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-ovn\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711657 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-cni-bin\") pod \"ovnkube-node-s2m95\" (UID: 
\"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711737 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovnkube-script-lib\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711777 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovnkube-config\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711817 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ql7q\" (UniqueName: \"kubernetes.io/projected/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-kube-api-access-2ql7q\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.711885 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-env-overrides\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712000 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-slash\") pod 
\"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712143 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-var-lib-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712172 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-etc-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712200 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-node-log\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712310 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-systemd\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712384 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712456 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-cni-netd\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712573 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712640 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-kubelet\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712680 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-run-ovn-kubernetes\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712716 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-systemd-units\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.712995 5030 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.713068 5030 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.713096 5030 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.713121 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xgmb\" (UniqueName: \"kubernetes.io/projected/44c9601c-cc85-4e79-aadd-8d20e2ea9f12-kube-api-access-9xgmb\") on node \"crc\" DevicePath \"\"" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.813971 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814026 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-kubelet\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814044 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-run-ovn-kubernetes\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814061 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-systemd-units\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814088 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovn-node-metrics-cert\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814108 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-log-socket\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814126 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-run-netns\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814143 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-ovn\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814162 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-cni-bin\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814177 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovnkube-script-lib\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814195 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovnkube-config\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814212 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ql7q\" (UniqueName: 
\"kubernetes.io/projected/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-kube-api-access-2ql7q\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814217 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-kubelet\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814273 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-cni-bin\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814226 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-run-ovn-kubernetes\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814362 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-systemd-units\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814386 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-log-socket\") pod \"ovnkube-node-s2m95\" 
(UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814427 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-run-netns\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814470 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-ovn\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814237 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-env-overrides\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814543 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-slash\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814592 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-var-lib-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 
12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814615 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-etc-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814640 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-node-log\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814677 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-systemd\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814716 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-cni-netd\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814739 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 
12:04:57.814843 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814863 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-env-overrides\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814883 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-slash\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814913 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-systemd\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814921 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-host-cni-netd\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814950 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-var-lib-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.814950 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-etc-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.815046 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-node-log\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.815309 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovnkube-config\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.815521 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovnkube-script-lib\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.816356 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-run-openvswitch\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.821054 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-ovn-node-metrics-cert\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.831532 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ql7q\" (UniqueName: \"kubernetes.io/projected/6aebb2cd-7ccd-487a-89f7-30dc6d942d30-kube-api-access-2ql7q\") pod \"ovnkube-node-s2m95\" (UID: \"6aebb2cd-7ccd-487a-89f7-30dc6d942d30\") " pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: I1128 12:04:57.897171 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:04:57 crc kubenswrapper[5030]: W1128 12:04:57.925690 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aebb2cd_7ccd_487a_89f7_30dc6d942d30.slice/crio-ad0f407d8229a5a70eb35034e1893f2e04f2cc2f964a34547397958d20d7b1c6 WatchSource:0}: Error finding container ad0f407d8229a5a70eb35034e1893f2e04f2cc2f964a34547397958d20d7b1c6: Status 404 returned error can't find the container with id ad0f407d8229a5a70eb35034e1893f2e04f2cc2f964a34547397958d20d7b1c6 Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.322836 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/2.log" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.326377 5030 generic.go:334] "Generic (PLEG): container finished" podID="6aebb2cd-7ccd-487a-89f7-30dc6d942d30" containerID="9a43c59c1d306ff677370567f1db9cafcee46b4226325f0b7a60fa506020dc7e" exitCode=0 Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.326517 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerDied","Data":"9a43c59c1d306ff677370567f1db9cafcee46b4226325f0b7a60fa506020dc7e"} Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.326585 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"ad0f407d8229a5a70eb35034e1893f2e04f2cc2f964a34547397958d20d7b1c6"} Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.337669 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovn-acl-logging/0.log" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 
12:04:58.338573 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8vnfr_44c9601c-cc85-4e79-aadd-8d20e2ea9f12/ovn-controller/0.log" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.340437 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" event={"ID":"44c9601c-cc85-4e79-aadd-8d20e2ea9f12","Type":"ContainerDied","Data":"3e028ea9d3bf1d8a39325d8ffd5fb17e5d86435c2af3d682ae2b5dac6621ed9d"} Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.340543 5030 scope.go:117] "RemoveContainer" containerID="5a6f6d706fba68f794de96394a58708bb284b375ac3193a214cd4f55b207d8d1" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.340583 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8vnfr" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.376643 5030 scope.go:117] "RemoveContainer" containerID="7d5f07d8139a9c9baac00a6de37b7529a54fadd9fea35d85f9352ed404b208e7" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.421266 5030 scope.go:117] "RemoveContainer" containerID="50e82bb67d187ea3c2534403399702026380f9c1bbbf9f7b252ab10c48467857" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.431674 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vnfr"] Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.444025 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8vnfr"] Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.480204 5030 scope.go:117] "RemoveContainer" containerID="ff4a6ee839bbd8b10e64be7788abb65caa8fd4fe57a43cd2abdeba06dd098294" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.493869 5030 scope.go:117] "RemoveContainer" containerID="f939cc69f11195d2d2989ea1febd9683388436eb07e1b582512ce0a5363260b1" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.508105 5030 
scope.go:117] "RemoveContainer" containerID="e764d8b253b0d17a6582767febb99208382bf978b8188c78a0a49c15b61cc8e2" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.525305 5030 scope.go:117] "RemoveContainer" containerID="fcd99d286bae3b830e16145d702659afba8f6c4c7966159a2cdd6dbcf2bd52eb" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.554488 5030 scope.go:117] "RemoveContainer" containerID="54fbc9292498bbe784d715952f50be62f513b513dd02037be7c68bfbd48bafff" Nov 28 12:04:58 crc kubenswrapper[5030]: I1128 12:04:58.574122 5030 scope.go:117] "RemoveContainer" containerID="86d40b1e6034e31a5a82641f4ca31e959cc86688f4ddb908dbff9b9ed1853769" Nov 28 12:04:59 crc kubenswrapper[5030]: I1128 12:04:59.352381 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"4e3ba5e8d96a9ce8669e8db8286d99ff3eb49f6a1d5d3fbe057984afffb204ce"} Nov 28 12:04:59 crc kubenswrapper[5030]: I1128 12:04:59.352857 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"838562c709236abf86d5c3a73a95ee56af4a5a271210d06d1396291224fcdc6b"} Nov 28 12:04:59 crc kubenswrapper[5030]: I1128 12:04:59.352890 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"3ad410116ea3897ba283a29fbc5c1c537b9ff6eb74cede6282c536be8b2c7c26"} Nov 28 12:04:59 crc kubenswrapper[5030]: I1128 12:04:59.352912 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"3fcb1d923af3cf0356c031273e06336c7ce533ff8089061e7a241be4a499796e"} Nov 28 12:04:59 crc kubenswrapper[5030]: I1128 12:04:59.352930 5030 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"3eaec1bcd3e6b2e1161584979b3e8240598a28922bcd782dc0056170a8acc65c"} Nov 28 12:05:00 crc kubenswrapper[5030]: I1128 12:05:00.402963 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c9601c-cc85-4e79-aadd-8d20e2ea9f12" path="/var/lib/kubelet/pods/44c9601c-cc85-4e79-aadd-8d20e2ea9f12/volumes" Nov 28 12:05:03 crc kubenswrapper[5030]: I1128 12:05:03.202561 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:05:03 crc kubenswrapper[5030]: I1128 12:05:03.203027 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:05:03 crc kubenswrapper[5030]: I1128 12:05:03.395899 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"ab5bdd7eea618edd27332dba0392d1939845dad8cefcf4dfbba789573880a95d"} Nov 28 12:05:05 crc kubenswrapper[5030]: I1128 12:05:05.417540 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"198d314e8730f09e634b6e787e1a443e1c785cc3616f2b9a6ccd3cc697a79f84"} Nov 28 12:05:07 crc kubenswrapper[5030]: I1128 12:05:07.437290 5030 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" event={"ID":"6aebb2cd-7ccd-487a-89f7-30dc6d942d30","Type":"ContainerStarted","Data":"0c2c66fe49689201337b950eb9dc29bf20f41e9cefbdd30ea1be5d56ce87d31c"} Nov 28 12:05:07 crc kubenswrapper[5030]: I1128 12:05:07.438894 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:05:07 crc kubenswrapper[5030]: I1128 12:05:07.439115 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:05:07 crc kubenswrapper[5030]: I1128 12:05:07.439215 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:05:07 crc kubenswrapper[5030]: I1128 12:05:07.484239 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:05:07 crc kubenswrapper[5030]: I1128 12:05:07.501541 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" podStartSLOduration=10.501515374 podStartE2EDuration="10.501515374s" podCreationTimestamp="2025-11-28 12:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:05:07.489153935 +0000 UTC m=+725.430896628" watchObservedRunningTime="2025-11-28 12:05:07.501515374 +0000 UTC m=+725.443258067" Nov 28 12:05:07 crc kubenswrapper[5030]: I1128 12:05:07.536930 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:05:11 crc kubenswrapper[5030]: I1128 12:05:11.392588 5030 scope.go:117] "RemoveContainer" containerID="018e3d90020cc03b39dc0110a6414d3de5aa9a5b4fdff14fe5f0fec5829fd973" Nov 28 12:05:12 crc kubenswrapper[5030]: I1128 12:05:12.478671 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-kfz78_4ee84379-3754-48c5-aaab-15dbc36caa16/kube-multus/2.log" Nov 28 12:05:12 crc kubenswrapper[5030]: I1128 12:05:12.479149 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kfz78" event={"ID":"4ee84379-3754-48c5-aaab-15dbc36caa16","Type":"ContainerStarted","Data":"7a5db062291d89f78a269f62b6d8f81f4bcb2e0c95a5ef4edfe0beae746087a0"} Nov 28 12:05:27 crc kubenswrapper[5030]: I1128 12:05:27.933901 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s2m95" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.211920 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm"] Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.214071 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.216290 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.227346 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm"] Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.273005 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.273079 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.273104 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpsr\" (UniqueName: \"kubernetes.io/projected/64a19be7-4e6b-43eb-9ebd-93a60054b661-kube-api-access-2kpsr\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.374451 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.374987 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.375036 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kpsr\" (UniqueName: 
\"kubernetes.io/projected/64a19be7-4e6b-43eb-9ebd-93a60054b661-kube-api-access-2kpsr\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.375335 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.375673 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.399612 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kpsr\" (UniqueName: \"kubernetes.io/projected/64a19be7-4e6b-43eb-9ebd-93a60054b661-kube-api-access-2kpsr\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.584061 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:32 crc kubenswrapper[5030]: I1128 12:05:32.825228 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm"] Nov 28 12:05:32 crc kubenswrapper[5030]: W1128 12:05:32.838295 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a19be7_4e6b_43eb_9ebd_93a60054b661.slice/crio-b4a032d7cddcfe06ae60713bdc85021a3c43c88151d0dd9af114bd2b57152e13 WatchSource:0}: Error finding container b4a032d7cddcfe06ae60713bdc85021a3c43c88151d0dd9af114bd2b57152e13: Status 404 returned error can't find the container with id b4a032d7cddcfe06ae60713bdc85021a3c43c88151d0dd9af114bd2b57152e13 Nov 28 12:05:33 crc kubenswrapper[5030]: I1128 12:05:33.026203 5030 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 12:05:33 crc kubenswrapper[5030]: I1128 12:05:33.201705 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:05:33 crc kubenswrapper[5030]: I1128 12:05:33.201821 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:05:33 crc kubenswrapper[5030]: I1128 12:05:33.633577 5030 generic.go:334] "Generic (PLEG): container finished" podID="64a19be7-4e6b-43eb-9ebd-93a60054b661" 
containerID="f4cce34db485c15ad35de0ed7ecae3c15e2c18280fbf059a077663b22403ab59" exitCode=0 Nov 28 12:05:33 crc kubenswrapper[5030]: I1128 12:05:33.633663 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" event={"ID":"64a19be7-4e6b-43eb-9ebd-93a60054b661","Type":"ContainerDied","Data":"f4cce34db485c15ad35de0ed7ecae3c15e2c18280fbf059a077663b22403ab59"} Nov 28 12:05:33 crc kubenswrapper[5030]: I1128 12:05:33.633756 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" event={"ID":"64a19be7-4e6b-43eb-9ebd-93a60054b661","Type":"ContainerStarted","Data":"b4a032d7cddcfe06ae60713bdc85021a3c43c88151d0dd9af114bd2b57152e13"} Nov 28 12:05:33 crc kubenswrapper[5030]: I1128 12:05:33.638058 5030 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.539907 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4556w"] Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.542253 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.561601 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4556w"] Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.605785 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-utilities\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.606680 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcknr\" (UniqueName: \"kubernetes.io/projected/080389f5-c012-4b73-aba3-895f3e179384-kube-api-access-zcknr\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.606813 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-catalog-content\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.707704 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-catalog-content\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.707789 5030 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-utilities\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.707851 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcknr\" (UniqueName: \"kubernetes.io/projected/080389f5-c012-4b73-aba3-895f3e179384-kube-api-access-zcknr\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.708396 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-catalog-content\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.708451 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-utilities\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.737313 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcknr\" (UniqueName: \"kubernetes.io/projected/080389f5-c012-4b73-aba3-895f3e179384-kube-api-access-zcknr\") pod \"redhat-operators-4556w\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:34 crc kubenswrapper[5030]: I1128 12:05:34.909352 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:35 crc kubenswrapper[5030]: I1128 12:05:35.209277 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4556w"] Nov 28 12:05:35 crc kubenswrapper[5030]: W1128 12:05:35.221865 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod080389f5_c012_4b73_aba3_895f3e179384.slice/crio-aa8ea97b804a4698f2fa7b398fac0b8dbefef295eadc50214955859508012b54 WatchSource:0}: Error finding container aa8ea97b804a4698f2fa7b398fac0b8dbefef295eadc50214955859508012b54: Status 404 returned error can't find the container with id aa8ea97b804a4698f2fa7b398fac0b8dbefef295eadc50214955859508012b54 Nov 28 12:05:35 crc kubenswrapper[5030]: I1128 12:05:35.644602 5030 generic.go:334] "Generic (PLEG): container finished" podID="080389f5-c012-4b73-aba3-895f3e179384" containerID="d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12" exitCode=0 Nov 28 12:05:35 crc kubenswrapper[5030]: I1128 12:05:35.644667 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4556w" event={"ID":"080389f5-c012-4b73-aba3-895f3e179384","Type":"ContainerDied","Data":"d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12"} Nov 28 12:05:35 crc kubenswrapper[5030]: I1128 12:05:35.644941 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4556w" event={"ID":"080389f5-c012-4b73-aba3-895f3e179384","Type":"ContainerStarted","Data":"aa8ea97b804a4698f2fa7b398fac0b8dbefef295eadc50214955859508012b54"} Nov 28 12:05:35 crc kubenswrapper[5030]: I1128 12:05:35.647897 5030 generic.go:334] "Generic (PLEG): container finished" podID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerID="4cbf19b022821812417f6a981b82e40ac09876268e750debd7bd9820a0a89af2" exitCode=0 Nov 28 12:05:35 crc kubenswrapper[5030]: I1128 12:05:35.647962 
5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" event={"ID":"64a19be7-4e6b-43eb-9ebd-93a60054b661","Type":"ContainerDied","Data":"4cbf19b022821812417f6a981b82e40ac09876268e750debd7bd9820a0a89af2"} Nov 28 12:05:36 crc kubenswrapper[5030]: I1128 12:05:36.661176 5030 generic.go:334] "Generic (PLEG): container finished" podID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerID="781eef246119d1d2f3cddc7e1284e184b34c0be9ae2e80f1ba1152f706fc5729" exitCode=0 Nov 28 12:05:36 crc kubenswrapper[5030]: I1128 12:05:36.661232 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" event={"ID":"64a19be7-4e6b-43eb-9ebd-93a60054b661","Type":"ContainerDied","Data":"781eef246119d1d2f3cddc7e1284e184b34c0be9ae2e80f1ba1152f706fc5729"} Nov 28 12:05:37 crc kubenswrapper[5030]: I1128 12:05:37.669875 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4556w" event={"ID":"080389f5-c012-4b73-aba3-895f3e179384","Type":"ContainerStarted","Data":"31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90"} Nov 28 12:05:37 crc kubenswrapper[5030]: I1128 12:05:37.968339 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.141923 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-util\") pod \"64a19be7-4e6b-43eb-9ebd-93a60054b661\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.141988 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-bundle\") pod \"64a19be7-4e6b-43eb-9ebd-93a60054b661\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.142025 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kpsr\" (UniqueName: \"kubernetes.io/projected/64a19be7-4e6b-43eb-9ebd-93a60054b661-kube-api-access-2kpsr\") pod \"64a19be7-4e6b-43eb-9ebd-93a60054b661\" (UID: \"64a19be7-4e6b-43eb-9ebd-93a60054b661\") " Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.143337 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-bundle" (OuterVolumeSpecName: "bundle") pod "64a19be7-4e6b-43eb-9ebd-93a60054b661" (UID: "64a19be7-4e6b-43eb-9ebd-93a60054b661"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.149257 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a19be7-4e6b-43eb-9ebd-93a60054b661-kube-api-access-2kpsr" (OuterVolumeSpecName: "kube-api-access-2kpsr") pod "64a19be7-4e6b-43eb-9ebd-93a60054b661" (UID: "64a19be7-4e6b-43eb-9ebd-93a60054b661"). InnerVolumeSpecName "kube-api-access-2kpsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.243239 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kpsr\" (UniqueName: \"kubernetes.io/projected/64a19be7-4e6b-43eb-9ebd-93a60054b661-kube-api-access-2kpsr\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.243268 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.395230 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-util" (OuterVolumeSpecName: "util") pod "64a19be7-4e6b-43eb-9ebd-93a60054b661" (UID: "64a19be7-4e6b-43eb-9ebd-93a60054b661"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.446053 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64a19be7-4e6b-43eb-9ebd-93a60054b661-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.678372 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" event={"ID":"64a19be7-4e6b-43eb-9ebd-93a60054b661","Type":"ContainerDied","Data":"b4a032d7cddcfe06ae60713bdc85021a3c43c88151d0dd9af114bd2b57152e13"} Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.679708 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a032d7cddcfe06ae60713bdc85021a3c43c88151d0dd9af114bd2b57152e13" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.680021 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm" Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.683803 5030 generic.go:334] "Generic (PLEG): container finished" podID="080389f5-c012-4b73-aba3-895f3e179384" containerID="31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90" exitCode=0 Nov 28 12:05:38 crc kubenswrapper[5030]: I1128 12:05:38.684025 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4556w" event={"ID":"080389f5-c012-4b73-aba3-895f3e179384","Type":"ContainerDied","Data":"31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90"} Nov 28 12:05:39 crc kubenswrapper[5030]: I1128 12:05:39.693384 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4556w" event={"ID":"080389f5-c012-4b73-aba3-895f3e179384","Type":"ContainerStarted","Data":"7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76"} Nov 28 12:05:39 crc kubenswrapper[5030]: I1128 12:05:39.718714 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4556w" podStartSLOduration=1.9472518189999999 podStartE2EDuration="5.71868426s" podCreationTimestamp="2025-11-28 12:05:34 +0000 UTC" firstStartedPulling="2025-11-28 12:05:35.64708628 +0000 UTC m=+753.588828963" lastFinishedPulling="2025-11-28 12:05:39.418518721 +0000 UTC m=+757.360261404" observedRunningTime="2025-11-28 12:05:39.713032169 +0000 UTC m=+757.654774852" watchObservedRunningTime="2025-11-28 12:05:39.71868426 +0000 UTC m=+757.660426973" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.008598 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg"] Nov 28 12:05:44 crc kubenswrapper[5030]: E1128 12:05:44.009291 5030 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerName="pull" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.009307 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerName="pull" Nov 28 12:05:44 crc kubenswrapper[5030]: E1128 12:05:44.009317 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerName="util" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.009322 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerName="util" Nov 28 12:05:44 crc kubenswrapper[5030]: E1128 12:05:44.009335 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerName="extract" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.009342 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerName="extract" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.009435 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a19be7-4e6b-43eb-9ebd-93a60054b661" containerName="extract" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.009923 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.012192 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.012294 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-82hkw" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.013757 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.013891 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.013984 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.035998 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg"] Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.127656 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f5fae05-87fd-4703-8262-540cbff62263-apiservice-cert\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.127723 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f5fae05-87fd-4703-8262-540cbff62263-webhook-cert\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: 
\"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.127788 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbcpj\" (UniqueName: \"kubernetes.io/projected/2f5fae05-87fd-4703-8262-540cbff62263-kube-api-access-rbcpj\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.229207 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f5fae05-87fd-4703-8262-540cbff62263-apiservice-cert\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.229309 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f5fae05-87fd-4703-8262-540cbff62263-webhook-cert\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.229371 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbcpj\" (UniqueName: \"kubernetes.io/projected/2f5fae05-87fd-4703-8262-540cbff62263-kube-api-access-rbcpj\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.238095 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f5fae05-87fd-4703-8262-540cbff62263-apiservice-cert\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.248604 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f5fae05-87fd-4703-8262-540cbff62263-webhook-cert\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.248943 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbcpj\" (UniqueName: \"kubernetes.io/projected/2f5fae05-87fd-4703-8262-540cbff62263-kube-api-access-rbcpj\") pod \"metallb-operator-controller-manager-56c7ff6859-5qpcg\" (UID: \"2f5fae05-87fd-4703-8262-540cbff62263\") " pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.326562 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.376938 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9"] Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.381743 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.385912 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.386062 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.396898 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8zhn6" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.444891 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9"] Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.546421 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-apiservice-cert\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.546743 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfxx\" (UniqueName: \"kubernetes.io/projected/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-kube-api-access-ftfxx\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.546817 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-webhook-cert\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.658237 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-webhook-cert\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.658332 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-apiservice-cert\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.658366 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfxx\" (UniqueName: \"kubernetes.io/projected/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-kube-api-access-ftfxx\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.663951 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-webhook-cert\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc 
kubenswrapper[5030]: I1128 12:05:44.664021 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-apiservice-cert\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.687651 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftfxx\" (UniqueName: \"kubernetes.io/projected/b56a2b73-e153-400a-9b6b-c7a20d9cbed6-kube-api-access-ftfxx\") pod \"metallb-operator-webhook-server-7c9d545dc4-92nd9\" (UID: \"b56a2b73-e153-400a-9b6b-c7a20d9cbed6\") " pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.698892 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.729725 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg"] Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.910052 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:44 crc kubenswrapper[5030]: I1128 12:05:44.911100 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:45 crc kubenswrapper[5030]: I1128 12:05:45.009979 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9"] Nov 28 12:05:45 crc kubenswrapper[5030]: I1128 12:05:45.731078 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" event={"ID":"2f5fae05-87fd-4703-8262-540cbff62263","Type":"ContainerStarted","Data":"c9f9cacc8772100627425f822276059ede46561f8fbbec69720f5cc377cc96b9"} Nov 28 12:05:45 crc kubenswrapper[5030]: I1128 12:05:45.733662 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" event={"ID":"b56a2b73-e153-400a-9b6b-c7a20d9cbed6","Type":"ContainerStarted","Data":"83baa4c95d0c4dde6e9cb8aea334e065f5c775857834a7ebd76855b5a8a1f93e"} Nov 28 12:05:45 crc kubenswrapper[5030]: I1128 12:05:45.985696 5030 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4556w" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="registry-server" probeResult="failure" output=< Nov 28 12:05:45 crc kubenswrapper[5030]: timeout: failed to connect service ":50051" within 1s Nov 28 12:05:45 crc kubenswrapper[5030]: > Nov 28 12:05:53 crc kubenswrapper[5030]: I1128 12:05:53.793794 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" event={"ID":"b56a2b73-e153-400a-9b6b-c7a20d9cbed6","Type":"ContainerStarted","Data":"01a906a253ac318cdebf3c8d0a6afee111b8bca1211a865d78b66ca837e8378e"} Nov 28 12:05:53 crc kubenswrapper[5030]: I1128 12:05:53.795207 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:05:53 crc kubenswrapper[5030]: I1128 12:05:53.796569 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" event={"ID":"2f5fae05-87fd-4703-8262-540cbff62263","Type":"ContainerStarted","Data":"434740ec8a6cd469645ed4fb20448cfe699891f134af5d9bfc13c24ebe78d872"} Nov 28 12:05:53 crc kubenswrapper[5030]: I1128 12:05:53.796750 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:05:53 crc kubenswrapper[5030]: I1128 12:05:53.854237 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" podStartSLOduration=2.286817061 podStartE2EDuration="9.854209335s" podCreationTimestamp="2025-11-28 12:05:44 +0000 UTC" firstStartedPulling="2025-11-28 12:05:45.027896437 +0000 UTC m=+762.969639110" lastFinishedPulling="2025-11-28 12:05:52.595288691 +0000 UTC m=+770.537031384" observedRunningTime="2025-11-28 12:05:53.83294601 +0000 UTC m=+771.774688733" watchObservedRunningTime="2025-11-28 12:05:53.854209335 +0000 UTC m=+771.795952028" Nov 28 12:05:53 crc kubenswrapper[5030]: I1128 12:05:53.855384 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" podStartSLOduration=3.03022798 podStartE2EDuration="10.855375306s" podCreationTimestamp="2025-11-28 12:05:43 +0000 UTC" firstStartedPulling="2025-11-28 12:05:44.753365589 +0000 UTC m=+762.695108272" lastFinishedPulling="2025-11-28 12:05:52.578512905 +0000 UTC m=+770.520255598" observedRunningTime="2025-11-28 12:05:53.852545871 +0000 UTC m=+771.794288564" watchObservedRunningTime="2025-11-28 12:05:53.855375306 +0000 UTC m=+771.797117999" Nov 28 12:05:54 crc kubenswrapper[5030]: I1128 12:05:54.997118 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:55 crc kubenswrapper[5030]: I1128 12:05:55.052833 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:55 crc kubenswrapper[5030]: I1128 12:05:55.250548 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4556w"] Nov 28 12:05:56 crc kubenswrapper[5030]: I1128 12:05:56.818600 5030 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4556w" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="registry-server" containerID="cri-o://7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76" gracePeriod=2 Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.791727 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.832904 5030 generic.go:334] "Generic (PLEG): container finished" podID="080389f5-c012-4b73-aba3-895f3e179384" containerID="7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76" exitCode=0 Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.832964 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4556w" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.832976 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4556w" event={"ID":"080389f5-c012-4b73-aba3-895f3e179384","Type":"ContainerDied","Data":"7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76"} Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.834345 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4556w" event={"ID":"080389f5-c012-4b73-aba3-895f3e179384","Type":"ContainerDied","Data":"aa8ea97b804a4698f2fa7b398fac0b8dbefef295eadc50214955859508012b54"} Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.834380 5030 scope.go:117] "RemoveContainer" containerID="7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.858422 5030 scope.go:117] "RemoveContainer" containerID="31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 
12:05:57.881897 5030 scope.go:117] "RemoveContainer" containerID="d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.901758 5030 scope.go:117] "RemoveContainer" containerID="7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76" Nov 28 12:05:57 crc kubenswrapper[5030]: E1128 12:05:57.902351 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76\": container with ID starting with 7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76 not found: ID does not exist" containerID="7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.902395 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76"} err="failed to get container status \"7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76\": rpc error: code = NotFound desc = could not find container \"7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76\": container with ID starting with 7612b399d2368d8df555623d6186835dbe1d9e5ece363a454b1ad0308fe9bb76 not found: ID does not exist" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.902435 5030 scope.go:117] "RemoveContainer" containerID="31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90" Nov 28 12:05:57 crc kubenswrapper[5030]: E1128 12:05:57.903314 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90\": container with ID starting with 31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90 not found: ID does not exist" 
containerID="31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.903346 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90"} err="failed to get container status \"31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90\": rpc error: code = NotFound desc = could not find container \"31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90\": container with ID starting with 31d6558face6946428a738f2370dd1acf81153d3557a9ab7ab82472b6ba4ad90 not found: ID does not exist" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.903360 5030 scope.go:117] "RemoveContainer" containerID="d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12" Nov 28 12:05:57 crc kubenswrapper[5030]: E1128 12:05:57.903745 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12\": container with ID starting with d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12 not found: ID does not exist" containerID="d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.903762 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12"} err="failed to get container status \"d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12\": rpc error: code = NotFound desc = could not find container \"d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12\": container with ID starting with d3389220d047eaa925fde478a216feb89b03393a800926c7e22533642c38ed12 not found: ID does not exist" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.949736 5030 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcknr\" (UniqueName: \"kubernetes.io/projected/080389f5-c012-4b73-aba3-895f3e179384-kube-api-access-zcknr\") pod \"080389f5-c012-4b73-aba3-895f3e179384\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.949842 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-catalog-content\") pod \"080389f5-c012-4b73-aba3-895f3e179384\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.949956 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-utilities\") pod \"080389f5-c012-4b73-aba3-895f3e179384\" (UID: \"080389f5-c012-4b73-aba3-895f3e179384\") " Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.951373 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-utilities" (OuterVolumeSpecName: "utilities") pod "080389f5-c012-4b73-aba3-895f3e179384" (UID: "080389f5-c012-4b73-aba3-895f3e179384"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:57 crc kubenswrapper[5030]: I1128 12:05:57.960766 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080389f5-c012-4b73-aba3-895f3e179384-kube-api-access-zcknr" (OuterVolumeSpecName: "kube-api-access-zcknr") pod "080389f5-c012-4b73-aba3-895f3e179384" (UID: "080389f5-c012-4b73-aba3-895f3e179384"). InnerVolumeSpecName "kube-api-access-zcknr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:05:58 crc kubenswrapper[5030]: I1128 12:05:58.052074 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcknr\" (UniqueName: \"kubernetes.io/projected/080389f5-c012-4b73-aba3-895f3e179384-kube-api-access-zcknr\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:58 crc kubenswrapper[5030]: I1128 12:05:58.052133 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:58 crc kubenswrapper[5030]: I1128 12:05:58.064015 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "080389f5-c012-4b73-aba3-895f3e179384" (UID: "080389f5-c012-4b73-aba3-895f3e179384"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:05:58 crc kubenswrapper[5030]: I1128 12:05:58.154203 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080389f5-c012-4b73-aba3-895f3e179384-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:05:58 crc kubenswrapper[5030]: I1128 12:05:58.167432 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4556w"] Nov 28 12:05:58 crc kubenswrapper[5030]: I1128 12:05:58.173895 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4556w"] Nov 28 12:05:58 crc kubenswrapper[5030]: I1128 12:05:58.400340 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080389f5-c012-4b73-aba3-895f3e179384" path="/var/lib/kubelet/pods/080389f5-c012-4b73-aba3-895f3e179384/volumes" Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.202257 5030 patch_prober.go:28] 
interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.202392 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.202536 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.203601 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d6b24c1331357c81e9c3721fca85bfc8df7a48f3286c0b8748f4a82dbcaa4eb"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.203714 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://1d6b24c1331357c81e9c3721fca85bfc8df7a48f3286c0b8748f4a82dbcaa4eb" gracePeriod=600 Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.880514 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="1d6b24c1331357c81e9c3721fca85bfc8df7a48f3286c0b8748f4a82dbcaa4eb" exitCode=0 Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 
12:06:03.880525 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"1d6b24c1331357c81e9c3721fca85bfc8df7a48f3286c0b8748f4a82dbcaa4eb"} Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.881148 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"440c69d6f2693ab24ec11da83e2b2b49568d8223dcdef3effa26def3f51975e3"} Nov 28 12:06:03 crc kubenswrapper[5030]: I1128 12:06:03.881225 5030 scope.go:117] "RemoveContainer" containerID="8c68d424e56207f9bef5dcba47aa3662682bfcc69409b93749507c45f6366456" Nov 28 12:06:04 crc kubenswrapper[5030]: I1128 12:06:04.704860 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7c9d545dc4-92nd9" Nov 28 12:06:24 crc kubenswrapper[5030]: I1128 12:06:24.331296 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-56c7ff6859-5qpcg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.247750 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vg7xg"] Nov 28 12:06:25 crc kubenswrapper[5030]: E1128 12:06:25.248052 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="registry-server" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.248068 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="registry-server" Nov 28 12:06:25 crc kubenswrapper[5030]: E1128 12:06:25.248085 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="extract-content" Nov 28 12:06:25 crc 
kubenswrapper[5030]: I1128 12:06:25.248092 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="extract-content" Nov 28 12:06:25 crc kubenswrapper[5030]: E1128 12:06:25.248106 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="extract-utilities" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.248112 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="extract-utilities" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.248219 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="080389f5-c012-4b73-aba3-895f3e179384" containerName="registry-server" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.254063 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.257389 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.260554 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kcfrc" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.261015 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.270197 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487"] Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.271760 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.275877 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.283006 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487"] Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313216 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6t24\" (UniqueName: \"kubernetes.io/projected/1f4ef950-494c-4d87-8886-1386f04a3970-kube-api-access-w6t24\") pod \"frr-k8s-webhook-server-7fcb986d4-m8487\" (UID: \"1f4ef950-494c-4d87-8886-1386f04a3970\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313286 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f4ef950-494c-4d87-8886-1386f04a3970-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-m8487\" (UID: \"1f4ef950-494c-4d87-8886-1386f04a3970\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313317 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-reloader\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313351 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-sockets\") pod \"frr-k8s-vg7xg\" (UID: 
\"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313382 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/032fe48e-074f-4471-80f1-940c9a22e1b3-metrics-certs\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313407 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-conf\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313523 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-startup\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313556 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-metrics\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.313577 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jxn7\" (UniqueName: \"kubernetes.io/projected/032fe48e-074f-4471-80f1-940c9a22e1b3-kube-api-access-7jxn7\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc 
kubenswrapper[5030]: I1128 12:06:25.355232 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gr75f"] Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.356113 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.360615 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.362826 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8tkz5" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.362835 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.363090 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.383265 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-h6zgg"] Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.384417 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.387944 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.403994 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-h6zgg"] Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414716 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be02d333-c255-4eae-91d6-14dff16fd95f-cert\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414766 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-metrics\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414802 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jxn7\" (UniqueName: \"kubernetes.io/projected/032fe48e-074f-4471-80f1-940c9a22e1b3-kube-api-access-7jxn7\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414827 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6t24\" (UniqueName: \"kubernetes.io/projected/1f4ef950-494c-4d87-8886-1386f04a3970-kube-api-access-w6t24\") pod \"frr-k8s-webhook-server-7fcb986d4-m8487\" (UID: \"1f4ef950-494c-4d87-8886-1386f04a3970\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: 
I1128 12:06:25.414851 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414878 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f4ef950-494c-4d87-8886-1386f04a3970-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-m8487\" (UID: \"1f4ef950-494c-4d87-8886-1386f04a3970\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414900 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-reloader\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414931 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-sockets\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414950 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e4331617-c99d-4b39-a50e-004035983d31-metallb-excludel2\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414971 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tx5bp\" (UniqueName: \"kubernetes.io/projected/be02d333-c255-4eae-91d6-14dff16fd95f-kube-api-access-tx5bp\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.414993 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/032fe48e-074f-4471-80f1-940c9a22e1b3-metrics-certs\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.415009 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be02d333-c255-4eae-91d6-14dff16fd95f-metrics-certs\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.415027 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5zk\" (UniqueName: \"kubernetes.io/projected/e4331617-c99d-4b39-a50e-004035983d31-kube-api-access-6g5zk\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.415048 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-conf\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.415067 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-metrics-certs\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.415091 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-startup\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.415360 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-metrics\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.419490 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-startup\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.419629 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-reloader\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.421089 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-sockets\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.421265 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/032fe48e-074f-4471-80f1-940c9a22e1b3-frr-conf\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.426397 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/032fe48e-074f-4471-80f1-940c9a22e1b3-metrics-certs\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.426639 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f4ef950-494c-4d87-8886-1386f04a3970-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-m8487\" (UID: \"1f4ef950-494c-4d87-8886-1386f04a3970\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.441043 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jxn7\" (UniqueName: \"kubernetes.io/projected/032fe48e-074f-4471-80f1-940c9a22e1b3-kube-api-access-7jxn7\") pod \"frr-k8s-vg7xg\" (UID: \"032fe48e-074f-4471-80f1-940c9a22e1b3\") " pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.444738 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6t24\" (UniqueName: \"kubernetes.io/projected/1f4ef950-494c-4d87-8886-1386f04a3970-kube-api-access-w6t24\") pod \"frr-k8s-webhook-server-7fcb986d4-m8487\" (UID: \"1f4ef950-494c-4d87-8886-1386f04a3970\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.516452 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/e4331617-c99d-4b39-a50e-004035983d31-metallb-excludel2\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.516538 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx5bp\" (UniqueName: \"kubernetes.io/projected/be02d333-c255-4eae-91d6-14dff16fd95f-kube-api-access-tx5bp\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.516567 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be02d333-c255-4eae-91d6-14dff16fd95f-metrics-certs\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.516584 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g5zk\" (UniqueName: \"kubernetes.io/projected/e4331617-c99d-4b39-a50e-004035983d31-kube-api-access-6g5zk\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.516603 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-metrics-certs\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.516645 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be02d333-c255-4eae-91d6-14dff16fd95f-cert\") pod \"controller-f8648f98b-h6zgg\" (UID: 
\"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.516677 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: E1128 12:06:25.517926 5030 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 28 12:06:25 crc kubenswrapper[5030]: E1128 12:06:25.518026 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-metrics-certs podName:e4331617-c99d-4b39-a50e-004035983d31 nodeName:}" failed. No retries permitted until 2025-11-28 12:06:26.017990569 +0000 UTC m=+803.959733252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-metrics-certs") pod "speaker-gr75f" (UID: "e4331617-c99d-4b39-a50e-004035983d31") : secret "speaker-certs-secret" not found Nov 28 12:06:25 crc kubenswrapper[5030]: E1128 12:06:25.518152 5030 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 12:06:25 crc kubenswrapper[5030]: E1128 12:06:25.518282 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist podName:e4331617-c99d-4b39-a50e-004035983d31 nodeName:}" failed. No retries permitted until 2025-11-28 12:06:26.018246255 +0000 UTC m=+803.959988938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist") pod "speaker-gr75f" (UID: "e4331617-c99d-4b39-a50e-004035983d31") : secret "metallb-memberlist" not found Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.518707 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e4331617-c99d-4b39-a50e-004035983d31-metallb-excludel2\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.520913 5030 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.521606 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be02d333-c255-4eae-91d6-14dff16fd95f-metrics-certs\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.531409 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be02d333-c255-4eae-91d6-14dff16fd95f-cert\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.536396 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g5zk\" (UniqueName: \"kubernetes.io/projected/e4331617-c99d-4b39-a50e-004035983d31-kube-api-access-6g5zk\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.538741 5030 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tx5bp\" (UniqueName: \"kubernetes.io/projected/be02d333-c255-4eae-91d6-14dff16fd95f-kube-api-access-tx5bp\") pod \"controller-f8648f98b-h6zgg\" (UID: \"be02d333-c255-4eae-91d6-14dff16fd95f\") " pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.572199 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.592850 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.702195 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.818891 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487"] Nov 28 12:06:25 crc kubenswrapper[5030]: W1128 12:06:25.826297 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f4ef950_494c_4d87_8886_1386f04a3970.slice/crio-60ab8d24b64ac6f451ed89026a425e21e3b24375722f798a9933be2fba47b30f WatchSource:0}: Error finding container 60ab8d24b64ac6f451ed89026a425e21e3b24375722f798a9933be2fba47b30f: Status 404 returned error can't find the container with id 60ab8d24b64ac6f451ed89026a425e21e3b24375722f798a9933be2fba47b30f Nov 28 12:06:25 crc kubenswrapper[5030]: I1128 12:06:25.921554 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-h6zgg"] Nov 28 12:06:25 crc kubenswrapper[5030]: W1128 12:06:25.926948 5030 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe02d333_c255_4eae_91d6_14dff16fd95f.slice/crio-722c9a7b5009974acd7c5909cc476f3a6f9479b856142628da31fbd64ed5bc12 WatchSource:0}: Error finding container 722c9a7b5009974acd7c5909cc476f3a6f9479b856142628da31fbd64ed5bc12: Status 404 returned error can't find the container with id 722c9a7b5009974acd7c5909cc476f3a6f9479b856142628da31fbd64ed5bc12 Nov 28 12:06:26 crc kubenswrapper[5030]: I1128 12:06:26.024256 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-metrics-certs\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:26 crc kubenswrapper[5030]: I1128 12:06:26.024346 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:26 crc kubenswrapper[5030]: E1128 12:06:26.024497 5030 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 12:06:26 crc kubenswrapper[5030]: E1128 12:06:26.024586 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist podName:e4331617-c99d-4b39-a50e-004035983d31 nodeName:}" failed. No retries permitted until 2025-11-28 12:06:27.024560494 +0000 UTC m=+804.966303177 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist") pod "speaker-gr75f" (UID: "e4331617-c99d-4b39-a50e-004035983d31") : secret "metallb-memberlist" not found Nov 28 12:06:26 crc kubenswrapper[5030]: I1128 12:06:26.033488 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-metrics-certs\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:26 crc kubenswrapper[5030]: I1128 12:06:26.056348 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" event={"ID":"1f4ef950-494c-4d87-8886-1386f04a3970","Type":"ContainerStarted","Data":"60ab8d24b64ac6f451ed89026a425e21e3b24375722f798a9933be2fba47b30f"} Nov 28 12:06:26 crc kubenswrapper[5030]: I1128 12:06:26.058205 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-h6zgg" event={"ID":"be02d333-c255-4eae-91d6-14dff16fd95f","Type":"ContainerStarted","Data":"722c9a7b5009974acd7c5909cc476f3a6f9479b856142628da31fbd64ed5bc12"} Nov 28 12:06:27 crc kubenswrapper[5030]: I1128 12:06:27.041672 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:27 crc kubenswrapper[5030]: E1128 12:06:27.041938 5030 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 12:06:27 crc kubenswrapper[5030]: E1128 12:06:27.042053 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist 
podName:e4331617-c99d-4b39-a50e-004035983d31 nodeName:}" failed. No retries permitted until 2025-11-28 12:06:29.042026671 +0000 UTC m=+806.983769354 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist") pod "speaker-gr75f" (UID: "e4331617-c99d-4b39-a50e-004035983d31") : secret "metallb-memberlist" not found Nov 28 12:06:27 crc kubenswrapper[5030]: I1128 12:06:27.066101 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerStarted","Data":"d7dff4d247aa8b9d885848b81c479907239f28827c6d3cca37968984989af1cf"} Nov 28 12:06:28 crc kubenswrapper[5030]: I1128 12:06:28.073716 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-h6zgg" event={"ID":"be02d333-c255-4eae-91d6-14dff16fd95f","Type":"ContainerStarted","Data":"8b7de30bfc7607f257ce2e3ea6d120c88a8a7ee1dbd0c5b54b4fe139b9811c95"} Nov 28 12:06:29 crc kubenswrapper[5030]: I1128 12:06:29.072830 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:29 crc kubenswrapper[5030]: I1128 12:06:29.080840 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4331617-c99d-4b39-a50e-004035983d31-memberlist\") pod \"speaker-gr75f\" (UID: \"e4331617-c99d-4b39-a50e-004035983d31\") " pod="metallb-system/speaker-gr75f" Nov 28 12:06:29 crc kubenswrapper[5030]: I1128 12:06:29.270931 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gr75f" Nov 28 12:06:30 crc kubenswrapper[5030]: I1128 12:06:30.096122 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gr75f" event={"ID":"e4331617-c99d-4b39-a50e-004035983d31","Type":"ContainerStarted","Data":"4ee0c4bcdb50397d902722ca9df823440f881a7593a38c594c79097553a66a08"} Nov 28 12:06:30 crc kubenswrapper[5030]: I1128 12:06:30.096234 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gr75f" event={"ID":"e4331617-c99d-4b39-a50e-004035983d31","Type":"ContainerStarted","Data":"1c7889859be57ef7dd7c478a9dce8aa12d079af408cef7ca6e7d604d0bb607fa"} Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.132845 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" event={"ID":"1f4ef950-494c-4d87-8886-1386f04a3970","Type":"ContainerStarted","Data":"832f86923e1daeaa21f7a0cabc1624305b00bb9dbb9ceeb66a59f24697d07835"} Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.133816 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.137060 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gr75f" event={"ID":"e4331617-c99d-4b39-a50e-004035983d31","Type":"ContainerStarted","Data":"4dda6ff7e095f08455310b5de8fe21a0db82b16e2c13aeaa8af2c7ca33d7782c"} Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.137198 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gr75f" Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.138902 5030 generic.go:334] "Generic (PLEG): container finished" podID="032fe48e-074f-4471-80f1-940c9a22e1b3" containerID="abfb7b100d2f25fcbd560f531aadbc4cb8c7e4416b4e7059e0b3524758518666" exitCode=0 Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.139020 5030 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerDied","Data":"abfb7b100d2f25fcbd560f531aadbc4cb8c7e4416b4e7059e0b3524758518666"} Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.140844 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-h6zgg" event={"ID":"be02d333-c255-4eae-91d6-14dff16fd95f","Type":"ContainerStarted","Data":"9e88f66440785444e253eb975dcc310a8b27a8da3849bcf268b76060cc474d25"} Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.141073 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.163038 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" podStartSLOduration=1.834999649 podStartE2EDuration="10.163013548s" podCreationTimestamp="2025-11-28 12:06:25 +0000 UTC" firstStartedPulling="2025-11-28 12:06:25.8289886 +0000 UTC m=+803.770731283" lastFinishedPulling="2025-11-28 12:06:34.157002509 +0000 UTC m=+812.098745182" observedRunningTime="2025-11-28 12:06:35.150695435 +0000 UTC m=+813.092438118" watchObservedRunningTime="2025-11-28 12:06:35.163013548 +0000 UTC m=+813.104756231" Nov 28 12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.184653 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-h6zgg" podStartSLOduration=3.273720757 podStartE2EDuration="10.184634421s" podCreationTimestamp="2025-11-28 12:06:25 +0000 UTC" firstStartedPulling="2025-11-28 12:06:27.142885356 +0000 UTC m=+805.084628059" lastFinishedPulling="2025-11-28 12:06:34.05379904 +0000 UTC m=+811.995541723" observedRunningTime="2025-11-28 12:06:35.178991139 +0000 UTC m=+813.120733812" watchObservedRunningTime="2025-11-28 12:06:35.184634421 +0000 UTC m=+813.126377104" Nov 28 
12:06:35 crc kubenswrapper[5030]: I1128 12:06:35.212628 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gr75f" podStartSLOduration=5.841729334 podStartE2EDuration="10.212599007s" podCreationTimestamp="2025-11-28 12:06:25 +0000 UTC" firstStartedPulling="2025-11-28 12:06:29.68485238 +0000 UTC m=+807.626595093" lastFinishedPulling="2025-11-28 12:06:34.055722083 +0000 UTC m=+811.997464766" observedRunningTime="2025-11-28 12:06:35.210914862 +0000 UTC m=+813.152657545" watchObservedRunningTime="2025-11-28 12:06:35.212599007 +0000 UTC m=+813.154341700" Nov 28 12:06:36 crc kubenswrapper[5030]: I1128 12:06:36.152102 5030 generic.go:334] "Generic (PLEG): container finished" podID="032fe48e-074f-4471-80f1-940c9a22e1b3" containerID="03b0134439b402406bdcb7f1d358ad1f7a45c384a318031147fa21fe8f7afaad" exitCode=0 Nov 28 12:06:36 crc kubenswrapper[5030]: I1128 12:06:36.152373 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerDied","Data":"03b0134439b402406bdcb7f1d358ad1f7a45c384a318031147fa21fe8f7afaad"} Nov 28 12:06:37 crc kubenswrapper[5030]: I1128 12:06:37.164674 5030 generic.go:334] "Generic (PLEG): container finished" podID="032fe48e-074f-4471-80f1-940c9a22e1b3" containerID="92289552cc70fd5e09363ff5709e25f99553cb578bbe5c0adc89a0027927a0eb" exitCode=0 Nov 28 12:06:37 crc kubenswrapper[5030]: I1128 12:06:37.164767 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerDied","Data":"92289552cc70fd5e09363ff5709e25f99553cb578bbe5c0adc89a0027927a0eb"} Nov 28 12:06:38 crc kubenswrapper[5030]: I1128 12:06:38.175759 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" 
event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerStarted","Data":"92bfd3682f47932bde997aec766d912f20ce843f50d90e4491dbd6519bd911d1"} Nov 28 12:06:38 crc kubenswrapper[5030]: I1128 12:06:38.176575 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerStarted","Data":"73e6e852f6936ce290da012dd59185f836f19d0442f3885ca6b784262ad89d4a"} Nov 28 12:06:38 crc kubenswrapper[5030]: I1128 12:06:38.176588 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerStarted","Data":"f0f2e900bb679f19197f43375446dfcbec8c78d559fbff476a9332c254a4ab18"} Nov 28 12:06:38 crc kubenswrapper[5030]: I1128 12:06:38.176601 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerStarted","Data":"db6eb2ca367e4433aba0c2a48d61459ac61010770834e101df0bcd916bce9503"} Nov 28 12:06:38 crc kubenswrapper[5030]: I1128 12:06:38.176611 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerStarted","Data":"f6db962a545c089d990cf7f280c8808bffd97b22b00430f2ae404f3328802add"} Nov 28 12:06:39 crc kubenswrapper[5030]: I1128 12:06:39.192352 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vg7xg" event={"ID":"032fe48e-074f-4471-80f1-940c9a22e1b3","Type":"ContainerStarted","Data":"f5cc5c08f8538c1a4d3a1806fba644a1a0b691bc91cea3519bc698bcc0360f05"} Nov 28 12:06:39 crc kubenswrapper[5030]: I1128 12:06:39.193023 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:39 crc kubenswrapper[5030]: I1128 12:06:39.234371 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-vg7xg" podStartSLOduration=6.929622624 podStartE2EDuration="14.234343407s" podCreationTimestamp="2025-11-28 12:06:25 +0000 UTC" firstStartedPulling="2025-11-28 12:06:26.872016479 +0000 UTC m=+804.813759172" lastFinishedPulling="2025-11-28 12:06:34.176737272 +0000 UTC m=+812.118479955" observedRunningTime="2025-11-28 12:06:39.225982071 +0000 UTC m=+817.167724764" watchObservedRunningTime="2025-11-28 12:06:39.234343407 +0000 UTC m=+817.176086130" Nov 28 12:06:39 crc kubenswrapper[5030]: I1128 12:06:39.279363 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gr75f" Nov 28 12:06:40 crc kubenswrapper[5030]: I1128 12:06:40.572822 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:40 crc kubenswrapper[5030]: I1128 12:06:40.641494 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.386096 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-v4zx5"] Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.387979 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-v4zx5" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.391058 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.391356 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-nz9sf" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.393561 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.432840 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-v4zx5"] Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.462853 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7ds5\" (UniqueName: \"kubernetes.io/projected/c677b703-b85b-4107-87ff-6c7c8db609dc-kube-api-access-w7ds5\") pod \"mariadb-operator-index-v4zx5\" (UID: \"c677b703-b85b-4107-87ff-6c7c8db609dc\") " pod="openstack-operators/mariadb-operator-index-v4zx5" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.565954 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7ds5\" (UniqueName: \"kubernetes.io/projected/c677b703-b85b-4107-87ff-6c7c8db609dc-kube-api-access-w7ds5\") pod \"mariadb-operator-index-v4zx5\" (UID: \"c677b703-b85b-4107-87ff-6c7c8db609dc\") " pod="openstack-operators/mariadb-operator-index-v4zx5" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.589943 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7ds5\" (UniqueName: \"kubernetes.io/projected/c677b703-b85b-4107-87ff-6c7c8db609dc-kube-api-access-w7ds5\") pod \"mariadb-operator-index-v4zx5\" (UID: \"c677b703-b85b-4107-87ff-6c7c8db609dc\") " 
pod="openstack-operators/mariadb-operator-index-v4zx5" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.599005 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m8487" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.707642 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-h6zgg" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.731750 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-v4zx5" Nov 28 12:06:45 crc kubenswrapper[5030]: I1128 12:06:45.993425 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-v4zx5"] Nov 28 12:06:45 crc kubenswrapper[5030]: W1128 12:06:45.998897 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc677b703_b85b_4107_87ff_6c7c8db609dc.slice/crio-a3c5a7e4f6f7d41ac0f994a0f732035df18e8e4b78b3cc6c9a12c5977a4d2d18 WatchSource:0}: Error finding container a3c5a7e4f6f7d41ac0f994a0f732035df18e8e4b78b3cc6c9a12c5977a4d2d18: Status 404 returned error can't find the container with id a3c5a7e4f6f7d41ac0f994a0f732035df18e8e4b78b3cc6c9a12c5977a4d2d18 Nov 28 12:06:46 crc kubenswrapper[5030]: I1128 12:06:46.256457 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-v4zx5" event={"ID":"c677b703-b85b-4107-87ff-6c7c8db609dc","Type":"ContainerStarted","Data":"a3c5a7e4f6f7d41ac0f994a0f732035df18e8e4b78b3cc6c9a12c5977a4d2d18"} Nov 28 12:06:47 crc kubenswrapper[5030]: I1128 12:06:47.548251 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-v4zx5"] Nov 28 12:06:47 crc kubenswrapper[5030]: I1128 12:06:47.957762 5030 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-index-llltt"] Nov 28 12:06:47 crc kubenswrapper[5030]: I1128 12:06:47.959005 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:47 crc kubenswrapper[5030]: I1128 12:06:47.994012 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-llltt"] Nov 28 12:06:48 crc kubenswrapper[5030]: I1128 12:06:48.004997 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zplt\" (UniqueName: \"kubernetes.io/projected/c132b5f7-718b-4f93-9589-ae208ff59e29-kube-api-access-6zplt\") pod \"mariadb-operator-index-llltt\" (UID: \"c132b5f7-718b-4f93-9589-ae208ff59e29\") " pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:48 crc kubenswrapper[5030]: I1128 12:06:48.107229 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zplt\" (UniqueName: \"kubernetes.io/projected/c132b5f7-718b-4f93-9589-ae208ff59e29-kube-api-access-6zplt\") pod \"mariadb-operator-index-llltt\" (UID: \"c132b5f7-718b-4f93-9589-ae208ff59e29\") " pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:48 crc kubenswrapper[5030]: I1128 12:06:48.143352 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zplt\" (UniqueName: \"kubernetes.io/projected/c132b5f7-718b-4f93-9589-ae208ff59e29-kube-api-access-6zplt\") pod \"mariadb-operator-index-llltt\" (UID: \"c132b5f7-718b-4f93-9589-ae208ff59e29\") " pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:48 crc kubenswrapper[5030]: I1128 12:06:48.293653 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:48 crc kubenswrapper[5030]: I1128 12:06:48.561102 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-llltt"] Nov 28 12:06:48 crc kubenswrapper[5030]: W1128 12:06:48.570109 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc132b5f7_718b_4f93_9589_ae208ff59e29.slice/crio-4718e25797902c9b8c1984f883cca0cb5f20557bf16f3f1b032375014310c1c3 WatchSource:0}: Error finding container 4718e25797902c9b8c1984f883cca0cb5f20557bf16f3f1b032375014310c1c3: Status 404 returned error can't find the container with id 4718e25797902c9b8c1984f883cca0cb5f20557bf16f3f1b032375014310c1c3 Nov 28 12:06:49 crc kubenswrapper[5030]: I1128 12:06:49.352643 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-llltt" event={"ID":"c132b5f7-718b-4f93-9589-ae208ff59e29","Type":"ContainerStarted","Data":"4718e25797902c9b8c1984f883cca0cb5f20557bf16f3f1b032375014310c1c3"} Nov 28 12:06:55 crc kubenswrapper[5030]: I1128 12:06:55.576179 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vg7xg" Nov 28 12:06:57 crc kubenswrapper[5030]: I1128 12:06:57.436105 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-llltt" event={"ID":"c132b5f7-718b-4f93-9589-ae208ff59e29","Type":"ContainerStarted","Data":"b472ba775b67dc88ee5c99b16078d94b490b155e1177e163917ad2460abfdff5"} Nov 28 12:06:57 crc kubenswrapper[5030]: I1128 12:06:57.439772 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-v4zx5" event={"ID":"c677b703-b85b-4107-87ff-6c7c8db609dc","Type":"ContainerStarted","Data":"124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d"} Nov 28 12:06:57 crc kubenswrapper[5030]: I1128 
12:06:57.439993 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-v4zx5" podUID="c677b703-b85b-4107-87ff-6c7c8db609dc" containerName="registry-server" containerID="cri-o://124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d" gracePeriod=2 Nov 28 12:06:57 crc kubenswrapper[5030]: I1128 12:06:57.465749 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-llltt" podStartSLOduration=2.229170835 podStartE2EDuration="10.465715202s" podCreationTimestamp="2025-11-28 12:06:47 +0000 UTC" firstStartedPulling="2025-11-28 12:06:48.57490412 +0000 UTC m=+826.516646803" lastFinishedPulling="2025-11-28 12:06:56.811448457 +0000 UTC m=+834.753191170" observedRunningTime="2025-11-28 12:06:57.457441918 +0000 UTC m=+835.399184641" watchObservedRunningTime="2025-11-28 12:06:57.465715202 +0000 UTC m=+835.407457915" Nov 28 12:06:57 crc kubenswrapper[5030]: I1128 12:06:57.914647 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-v4zx5" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.002385 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7ds5\" (UniqueName: \"kubernetes.io/projected/c677b703-b85b-4107-87ff-6c7c8db609dc-kube-api-access-w7ds5\") pod \"c677b703-b85b-4107-87ff-6c7c8db609dc\" (UID: \"c677b703-b85b-4107-87ff-6c7c8db609dc\") " Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.015780 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c677b703-b85b-4107-87ff-6c7c8db609dc-kube-api-access-w7ds5" (OuterVolumeSpecName: "kube-api-access-w7ds5") pod "c677b703-b85b-4107-87ff-6c7c8db609dc" (UID: "c677b703-b85b-4107-87ff-6c7c8db609dc"). InnerVolumeSpecName "kube-api-access-w7ds5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.103498 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7ds5\" (UniqueName: \"kubernetes.io/projected/c677b703-b85b-4107-87ff-6c7c8db609dc-kube-api-access-w7ds5\") on node \"crc\" DevicePath \"\"" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.294585 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.294651 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.348754 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.451035 5030 generic.go:334] "Generic (PLEG): container finished" podID="c677b703-b85b-4107-87ff-6c7c8db609dc" containerID="124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d" exitCode=0 Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.451117 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-v4zx5" event={"ID":"c677b703-b85b-4107-87ff-6c7c8db609dc","Type":"ContainerDied","Data":"124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d"} Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.451194 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-v4zx5" event={"ID":"c677b703-b85b-4107-87ff-6c7c8db609dc","Type":"ContainerDied","Data":"a3c5a7e4f6f7d41ac0f994a0f732035df18e8e4b78b3cc6c9a12c5977a4d2d18"} Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.451212 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-v4zx5" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.451232 5030 scope.go:117] "RemoveContainer" containerID="124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.480337 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-v4zx5"] Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.487134 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-v4zx5"] Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.488898 5030 scope.go:117] "RemoveContainer" containerID="124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d" Nov 28 12:06:58 crc kubenswrapper[5030]: E1128 12:06:58.490732 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d\": container with ID starting with 124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d not found: ID does not exist" containerID="124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d" Nov 28 12:06:58 crc kubenswrapper[5030]: I1128 12:06:58.490798 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d"} err="failed to get container status \"124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d\": rpc error: code = NotFound desc = could not find container \"124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d\": container with ID starting with 124d66145d7d5c560163f8bb40672d468354c79d8864f8dd3831faef20dd413d not found: ID does not exist" Nov 28 12:07:00 crc kubenswrapper[5030]: I1128 12:07:00.403990 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c677b703-b85b-4107-87ff-6c7c8db609dc" path="/var/lib/kubelet/pods/c677b703-b85b-4107-87ff-6c7c8db609dc/volumes" Nov 28 12:07:08 crc kubenswrapper[5030]: I1128 12:07:08.336725 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-llltt" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.962309 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk"] Nov 28 12:07:13 crc kubenswrapper[5030]: E1128 12:07:13.963040 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c677b703-b85b-4107-87ff-6c7c8db609dc" containerName="registry-server" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.963061 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c677b703-b85b-4107-87ff-6c7c8db609dc" containerName="registry-server" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.963265 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c677b703-b85b-4107-87ff-6c7c8db609dc" containerName="registry-server" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.964935 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.969774 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-br5mr" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.982897 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk"] Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.986291 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lvs\" (UniqueName: \"kubernetes.io/projected/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-kube-api-access-c6lvs\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.986828 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-bundle\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:13 crc kubenswrapper[5030]: I1128 12:07:13.987042 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-util\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 
12:07:14.087897 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lvs\" (UniqueName: \"kubernetes.io/projected/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-kube-api-access-c6lvs\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 12:07:14.088287 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-bundle\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 12:07:14.088442 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-util\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 12:07:14.088773 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-bundle\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 12:07:14.088939 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-util\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 12:07:14.108884 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lvs\" (UniqueName: \"kubernetes.io/projected/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-kube-api-access-c6lvs\") pod \"27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 12:07:14.349893 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:14 crc kubenswrapper[5030]: I1128 12:07:14.643189 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk"] Nov 28 12:07:15 crc kubenswrapper[5030]: I1128 12:07:15.605384 5030 generic.go:334] "Generic (PLEG): container finished" podID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerID="d7fc20b4c9e8fb32482c2c4202c14a8391b88f395d1d28f74e8a0f7d11f3b9db" exitCode=0 Nov 28 12:07:15 crc kubenswrapper[5030]: I1128 12:07:15.605620 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" event={"ID":"188bc9f2-ac35-4a70-a6f2-8d691c351ef8","Type":"ContainerDied","Data":"d7fc20b4c9e8fb32482c2c4202c14a8391b88f395d1d28f74e8a0f7d11f3b9db"} Nov 28 12:07:15 crc kubenswrapper[5030]: I1128 12:07:15.606528 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" event={"ID":"188bc9f2-ac35-4a70-a6f2-8d691c351ef8","Type":"ContainerStarted","Data":"faff1218192b7647473bdc99d2132c458cceb2c183d3b7c5f977bf8259fd7abc"} Nov 28 12:07:17 crc kubenswrapper[5030]: I1128 12:07:17.627253 5030 generic.go:334] "Generic (PLEG): container finished" podID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerID="c6b5463e46e374dab436bf4f29187ee448fc838697e5790bfa7b6a4eeb219eb8" exitCode=0 Nov 28 12:07:17 crc kubenswrapper[5030]: I1128 12:07:17.627946 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" event={"ID":"188bc9f2-ac35-4a70-a6f2-8d691c351ef8","Type":"ContainerDied","Data":"c6b5463e46e374dab436bf4f29187ee448fc838697e5790bfa7b6a4eeb219eb8"} Nov 28 12:07:18 crc kubenswrapper[5030]: I1128 12:07:18.638308 5030 generic.go:334] "Generic (PLEG): container finished" podID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerID="44e0f156778eb7c0ad2ac3186629ee6fba4d35c6b5ba0a7b6cc74616eb4b7c23" exitCode=0 Nov 28 12:07:18 crc kubenswrapper[5030]: I1128 12:07:18.638396 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" event={"ID":"188bc9f2-ac35-4a70-a6f2-8d691c351ef8","Type":"ContainerDied","Data":"44e0f156778eb7c0ad2ac3186629ee6fba4d35c6b5ba0a7b6cc74616eb4b7c23"} Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.004487 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.089192 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-util\") pod \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.089363 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6lvs\" (UniqueName: \"kubernetes.io/projected/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-kube-api-access-c6lvs\") pod \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.089456 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-bundle\") pod \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\" (UID: \"188bc9f2-ac35-4a70-a6f2-8d691c351ef8\") " Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.091672 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-bundle" (OuterVolumeSpecName: "bundle") pod "188bc9f2-ac35-4a70-a6f2-8d691c351ef8" (UID: "188bc9f2-ac35-4a70-a6f2-8d691c351ef8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.100892 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-kube-api-access-c6lvs" (OuterVolumeSpecName: "kube-api-access-c6lvs") pod "188bc9f2-ac35-4a70-a6f2-8d691c351ef8" (UID: "188bc9f2-ac35-4a70-a6f2-8d691c351ef8"). InnerVolumeSpecName "kube-api-access-c6lvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.191227 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.191684 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6lvs\" (UniqueName: \"kubernetes.io/projected/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-kube-api-access-c6lvs\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.317419 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-util" (OuterVolumeSpecName: "util") pod "188bc9f2-ac35-4a70-a6f2-8d691c351ef8" (UID: "188bc9f2-ac35-4a70-a6f2-8d691c351ef8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.395662 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188bc9f2-ac35-4a70-a6f2-8d691c351ef8-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.680667 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" event={"ID":"188bc9f2-ac35-4a70-a6f2-8d691c351ef8","Type":"ContainerDied","Data":"faff1218192b7647473bdc99d2132c458cceb2c183d3b7c5f977bf8259fd7abc"} Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.680730 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faff1218192b7647473bdc99d2132c458cceb2c183d3b7c5f977bf8259fd7abc" Nov 28 12:07:20 crc kubenswrapper[5030]: I1128 12:07:20.680839 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.164836 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v"] Nov 28 12:07:27 crc kubenswrapper[5030]: E1128 12:07:27.165575 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerName="extract" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.165587 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerName="extract" Nov 28 12:07:27 crc kubenswrapper[5030]: E1128 12:07:27.165603 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerName="util" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.165609 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerName="util" Nov 28 12:07:27 crc kubenswrapper[5030]: E1128 12:07:27.165620 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerName="pull" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.165627 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerName="pull" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.165728 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="188bc9f2-ac35-4a70-a6f2-8d691c351ef8" containerName="extract" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.166092 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.169004 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8x5zz" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.169567 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.171226 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.188534 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v"] Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.203482 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58z55\" (UniqueName: \"kubernetes.io/projected/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-kube-api-access-58z55\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.203586 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-apiservice-cert\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.203625 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-webhook-cert\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.305448 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58z55\" (UniqueName: \"kubernetes.io/projected/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-kube-api-access-58z55\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.305542 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-apiservice-cert\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.305580 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-webhook-cert\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.318596 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-webhook-cert\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " 
pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.324078 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58z55\" (UniqueName: \"kubernetes.io/projected/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-kube-api-access-58z55\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.326412 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4ad0efc8-bb7f-4a51-9ca8-a929626c3a29-apiservice-cert\") pod \"mariadb-operator-controller-manager-7cdbb9546b-2xp4v\" (UID: \"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29\") " pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.487516 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:27 crc kubenswrapper[5030]: I1128 12:07:27.735642 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v"] Nov 28 12:07:27 crc kubenswrapper[5030]: W1128 12:07:27.748661 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad0efc8_bb7f_4a51_9ca8_a929626c3a29.slice/crio-af682faff5850144be47b152fb27641398f5a6befc1638b40088316bd62a0049 WatchSource:0}: Error finding container af682faff5850144be47b152fb27641398f5a6befc1638b40088316bd62a0049: Status 404 returned error can't find the container with id af682faff5850144be47b152fb27641398f5a6befc1638b40088316bd62a0049 Nov 28 12:07:28 crc kubenswrapper[5030]: I1128 12:07:28.748711 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" event={"ID":"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29","Type":"ContainerStarted","Data":"af682faff5850144be47b152fb27641398f5a6befc1638b40088316bd62a0049"} Nov 28 12:07:35 crc kubenswrapper[5030]: I1128 12:07:35.809452 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" event={"ID":"4ad0efc8-bb7f-4a51-9ca8-a929626c3a29","Type":"ContainerStarted","Data":"0dbe3284f84e93a672196e24fca99748623adb1d73d3db940992eb959c162a77"} Nov 28 12:07:35 crc kubenswrapper[5030]: I1128 12:07:35.810221 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:35 crc kubenswrapper[5030]: I1128 12:07:35.830382 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" podStartSLOduration=1.501671472 
podStartE2EDuration="8.83036069s" podCreationTimestamp="2025-11-28 12:07:27 +0000 UTC" firstStartedPulling="2025-11-28 12:07:27.753307044 +0000 UTC m=+865.695049717" lastFinishedPulling="2025-11-28 12:07:35.081996232 +0000 UTC m=+873.023738935" observedRunningTime="2025-11-28 12:07:35.829827386 +0000 UTC m=+873.771570109" watchObservedRunningTime="2025-11-28 12:07:35.83036069 +0000 UTC m=+873.772103373" Nov 28 12:07:47 crc kubenswrapper[5030]: I1128 12:07:47.495207 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7cdbb9546b-2xp4v" Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.461885 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-bl4lw"] Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.469932 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-bl4lw" Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.472957 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-4wnl5" Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.500611 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-bl4lw"] Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.567360 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgpln\" (UniqueName: \"kubernetes.io/projected/e8a27e50-8b73-453a-ad62-196f06f477da-kube-api-access-wgpln\") pod \"infra-operator-index-bl4lw\" (UID: \"e8a27e50-8b73-453a-ad62-196f06f477da\") " pod="openstack-operators/infra-operator-index-bl4lw" Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.669961 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgpln\" (UniqueName: 
\"kubernetes.io/projected/e8a27e50-8b73-453a-ad62-196f06f477da-kube-api-access-wgpln\") pod \"infra-operator-index-bl4lw\" (UID: \"e8a27e50-8b73-453a-ad62-196f06f477da\") " pod="openstack-operators/infra-operator-index-bl4lw" Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.697528 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgpln\" (UniqueName: \"kubernetes.io/projected/e8a27e50-8b73-453a-ad62-196f06f477da-kube-api-access-wgpln\") pod \"infra-operator-index-bl4lw\" (UID: \"e8a27e50-8b73-453a-ad62-196f06f477da\") " pod="openstack-operators/infra-operator-index-bl4lw" Nov 28 12:07:50 crc kubenswrapper[5030]: I1128 12:07:50.807519 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-bl4lw" Nov 28 12:07:51 crc kubenswrapper[5030]: I1128 12:07:51.088568 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-bl4lw"] Nov 28 12:07:51 crc kubenswrapper[5030]: I1128 12:07:51.953047 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bl4lw" event={"ID":"e8a27e50-8b73-453a-ad62-196f06f477da","Type":"ContainerStarted","Data":"56eeb5becda39ebf5de79eed457abcd174616faf785abbee18f3c8c9e4a95b56"} Nov 28 12:07:52 crc kubenswrapper[5030]: I1128 12:07:52.962667 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bl4lw" event={"ID":"e8a27e50-8b73-453a-ad62-196f06f477da","Type":"ContainerStarted","Data":"23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a"} Nov 28 12:07:52 crc kubenswrapper[5030]: I1128 12:07:52.983971 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-bl4lw" podStartSLOduration=1.404641282 podStartE2EDuration="2.983943368s" podCreationTimestamp="2025-11-28 12:07:50 +0000 UTC" firstStartedPulling="2025-11-28 12:07:51.104621007 
+0000 UTC m=+889.046363690" lastFinishedPulling="2025-11-28 12:07:52.683923053 +0000 UTC m=+890.625665776" observedRunningTime="2025-11-28 12:07:52.982118779 +0000 UTC m=+890.923861462" watchObservedRunningTime="2025-11-28 12:07:52.983943368 +0000 UTC m=+890.925686081" Nov 28 12:07:54 crc kubenswrapper[5030]: I1128 12:07:54.248645 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-bl4lw"] Nov 28 12:07:54 crc kubenswrapper[5030]: I1128 12:07:54.868685 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-g8m7z"] Nov 28 12:07:54 crc kubenswrapper[5030]: I1128 12:07:54.869667 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:07:54 crc kubenswrapper[5030]: I1128 12:07:54.878347 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-g8m7z"] Nov 28 12:07:54 crc kubenswrapper[5030]: I1128 12:07:54.975219 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-bl4lw" podUID="e8a27e50-8b73-453a-ad62-196f06f477da" containerName="registry-server" containerID="cri-o://23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a" gracePeriod=2 Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.063096 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsqrt\" (UniqueName: \"kubernetes.io/projected/08a84de3-578b-42c2-8ca8-6ed063ab0d71-kube-api-access-lsqrt\") pod \"infra-operator-index-g8m7z\" (UID: \"08a84de3-578b-42c2-8ca8-6ed063ab0d71\") " pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.164742 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsqrt\" (UniqueName: 
\"kubernetes.io/projected/08a84de3-578b-42c2-8ca8-6ed063ab0d71-kube-api-access-lsqrt\") pod \"infra-operator-index-g8m7z\" (UID: \"08a84de3-578b-42c2-8ca8-6ed063ab0d71\") " pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.208930 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsqrt\" (UniqueName: \"kubernetes.io/projected/08a84de3-578b-42c2-8ca8-6ed063ab0d71-kube-api-access-lsqrt\") pod \"infra-operator-index-g8m7z\" (UID: \"08a84de3-578b-42c2-8ca8-6ed063ab0d71\") " pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.470512 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-bl4lw" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.483120 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.574612 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgpln\" (UniqueName: \"kubernetes.io/projected/e8a27e50-8b73-453a-ad62-196f06f477da-kube-api-access-wgpln\") pod \"e8a27e50-8b73-453a-ad62-196f06f477da\" (UID: \"e8a27e50-8b73-453a-ad62-196f06f477da\") " Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.583520 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a27e50-8b73-453a-ad62-196f06f477da-kube-api-access-wgpln" (OuterVolumeSpecName: "kube-api-access-wgpln") pod "e8a27e50-8b73-453a-ad62-196f06f477da" (UID: "e8a27e50-8b73-453a-ad62-196f06f477da"). InnerVolumeSpecName "kube-api-access-wgpln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.680773 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgpln\" (UniqueName: \"kubernetes.io/projected/e8a27e50-8b73-453a-ad62-196f06f477da-kube-api-access-wgpln\") on node \"crc\" DevicePath \"\"" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.787458 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-g8m7z"] Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.988359 5030 generic.go:334] "Generic (PLEG): container finished" podID="e8a27e50-8b73-453a-ad62-196f06f477da" containerID="23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a" exitCode=0 Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.988443 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bl4lw" event={"ID":"e8a27e50-8b73-453a-ad62-196f06f477da","Type":"ContainerDied","Data":"23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a"} Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.988503 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-bl4lw" event={"ID":"e8a27e50-8b73-453a-ad62-196f06f477da","Type":"ContainerDied","Data":"56eeb5becda39ebf5de79eed457abcd174616faf785abbee18f3c8c9e4a95b56"} Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.988533 5030 scope.go:117] "RemoveContainer" containerID="23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.988666 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-bl4lw" Nov 28 12:07:55 crc kubenswrapper[5030]: I1128 12:07:55.992762 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-g8m7z" event={"ID":"08a84de3-578b-42c2-8ca8-6ed063ab0d71","Type":"ContainerStarted","Data":"c494f66a52db3be84e8c2b0f484c9b403b68a38c717fa2a96770e77a3149dbe2"} Nov 28 12:07:56 crc kubenswrapper[5030]: I1128 12:07:56.028801 5030 scope.go:117] "RemoveContainer" containerID="23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a" Nov 28 12:07:56 crc kubenswrapper[5030]: E1128 12:07:56.030225 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a\": container with ID starting with 23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a not found: ID does not exist" containerID="23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a" Nov 28 12:07:56 crc kubenswrapper[5030]: I1128 12:07:56.030287 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a"} err="failed to get container status \"23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a\": rpc error: code = NotFound desc = could not find container \"23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a\": container with ID starting with 23a5bbece886d2d50595dffd3a15f348dc920aafff520c02568ac5a8d0c3bb2a not found: ID does not exist" Nov 28 12:07:56 crc kubenswrapper[5030]: I1128 12:07:56.033282 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-bl4lw"] Nov 28 12:07:56 crc kubenswrapper[5030]: I1128 12:07:56.043680 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-bl4lw"] Nov 28 12:07:56 
crc kubenswrapper[5030]: I1128 12:07:56.406692 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a27e50-8b73-453a-ad62-196f06f477da" path="/var/lib/kubelet/pods/e8a27e50-8b73-453a-ad62-196f06f477da/volumes" Nov 28 12:07:57 crc kubenswrapper[5030]: I1128 12:07:57.007776 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-g8m7z" event={"ID":"08a84de3-578b-42c2-8ca8-6ed063ab0d71","Type":"ContainerStarted","Data":"3f4fd7480b22c37c7cdf19c40310494410ef3df9fe2babbf8869e82d11718243"} Nov 28 12:07:57 crc kubenswrapper[5030]: I1128 12:07:57.054505 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-g8m7z" podStartSLOduration=2.545529238 podStartE2EDuration="3.054420896s" podCreationTimestamp="2025-11-28 12:07:54 +0000 UTC" firstStartedPulling="2025-11-28 12:07:55.800641144 +0000 UTC m=+893.742383817" lastFinishedPulling="2025-11-28 12:07:56.309532792 +0000 UTC m=+894.251275475" observedRunningTime="2025-11-28 12:07:57.02977149 +0000 UTC m=+894.971514183" watchObservedRunningTime="2025-11-28 12:07:57.054420896 +0000 UTC m=+894.996163629" Nov 28 12:08:03 crc kubenswrapper[5030]: I1128 12:08:03.202412 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:08:03 crc kubenswrapper[5030]: I1128 12:08:03.203430 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:08:05 crc kubenswrapper[5030]: I1128 12:08:05.483829 
5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:08:05 crc kubenswrapper[5030]: I1128 12:08:05.483920 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:08:05 crc kubenswrapper[5030]: I1128 12:08:05.536071 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:08:06 crc kubenswrapper[5030]: I1128 12:08:06.111272 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-g8m7z" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.729517 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp"] Nov 28 12:08:07 crc kubenswrapper[5030]: E1128 12:08:07.729888 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a27e50-8b73-453a-ad62-196f06f477da" containerName="registry-server" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.729907 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a27e50-8b73-453a-ad62-196f06f477da" containerName="registry-server" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.730099 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a27e50-8b73-453a-ad62-196f06f477da" containerName="registry-server" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.731144 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.734794 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-br5mr" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.748036 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp"] Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.894431 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz6fz\" (UniqueName: \"kubernetes.io/projected/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-kube-api-access-mz6fz\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.894659 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-bundle\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.894690 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-util\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 
12:08:07.996583 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-bundle\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.997264 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-bundle\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.997284 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-util\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.997428 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz6fz\" (UniqueName: \"kubernetes.io/projected/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-kube-api-access-mz6fz\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:07 crc kubenswrapper[5030]: I1128 12:08:07.998653 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-util\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:08 crc kubenswrapper[5030]: I1128 12:08:08.024171 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz6fz\" (UniqueName: \"kubernetes.io/projected/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-kube-api-access-mz6fz\") pod \"5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:08 crc kubenswrapper[5030]: I1128 12:08:08.054810 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:08 crc kubenswrapper[5030]: I1128 12:08:08.533650 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp"] Nov 28 12:08:09 crc kubenswrapper[5030]: I1128 12:08:09.103898 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" event={"ID":"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985","Type":"ContainerStarted","Data":"afe68a50fa12fa350e405e09f9fc59f058a8ee617e0f09d0932f04096c0d0f04"} Nov 28 12:08:10 crc kubenswrapper[5030]: I1128 12:08:10.113339 5030 generic.go:334] "Generic (PLEG): container finished" podID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerID="70c6490141218866bf4a323a7aeae74d4fbc32a1f596ac857f067288cd23f4ff" exitCode=0 Nov 28 12:08:10 crc kubenswrapper[5030]: I1128 12:08:10.113760 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" event={"ID":"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985","Type":"ContainerDied","Data":"70c6490141218866bf4a323a7aeae74d4fbc32a1f596ac857f067288cd23f4ff"} Nov 28 12:08:12 crc kubenswrapper[5030]: I1128 12:08:12.134145 5030 generic.go:334] "Generic (PLEG): container finished" podID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerID="4256cc96955accaee8234a440765cab6d531dc7a0609d87b1e2754e7c0dc18f8" exitCode=0 Nov 28 12:08:12 crc kubenswrapper[5030]: I1128 12:08:12.134399 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" event={"ID":"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985","Type":"ContainerDied","Data":"4256cc96955accaee8234a440765cab6d531dc7a0609d87b1e2754e7c0dc18f8"} Nov 28 12:08:13 crc kubenswrapper[5030]: I1128 12:08:13.147644 5030 generic.go:334] "Generic (PLEG): container finished" podID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerID="e44036b19fb4af8d87190649b82d08d0da1052e769fc52d173279edc64105e50" exitCode=0 Nov 28 12:08:13 crc kubenswrapper[5030]: I1128 12:08:13.147733 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" event={"ID":"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985","Type":"ContainerDied","Data":"e44036b19fb4af8d87190649b82d08d0da1052e769fc52d173279edc64105e50"} Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.559620 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.711131 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz6fz\" (UniqueName: \"kubernetes.io/projected/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-kube-api-access-mz6fz\") pod \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.711236 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-util\") pod \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.711323 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-bundle\") pod \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\" (UID: \"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985\") " Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.712997 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-bundle" (OuterVolumeSpecName: "bundle") pod "a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" (UID: "a3fd10b8-6b32-4a76-80a1-14a3ea9b4985"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.726663 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-kube-api-access-mz6fz" (OuterVolumeSpecName: "kube-api-access-mz6fz") pod "a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" (UID: "a3fd10b8-6b32-4a76-80a1-14a3ea9b4985"). InnerVolumeSpecName "kube-api-access-mz6fz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.748605 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-util" (OuterVolumeSpecName: "util") pod "a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" (UID: "a3fd10b8-6b32-4a76-80a1-14a3ea9b4985"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.813237 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.813646 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz6fz\" (UniqueName: \"kubernetes.io/projected/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-kube-api-access-mz6fz\") on node \"crc\" DevicePath \"\"" Nov 28 12:08:14 crc kubenswrapper[5030]: I1128 12:08:14.813661 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a3fd10b8-6b32-4a76-80a1-14a3ea9b4985-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:08:15 crc kubenswrapper[5030]: I1128 12:08:15.171300 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" event={"ID":"a3fd10b8-6b32-4a76-80a1-14a3ea9b4985","Type":"ContainerDied","Data":"afe68a50fa12fa350e405e09f9fc59f058a8ee617e0f09d0932f04096c0d0f04"} Nov 28 12:08:15 crc kubenswrapper[5030]: I1128 12:08:15.171377 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe68a50fa12fa350e405e09f9fc59f058a8ee617e0f09d0932f04096c0d0f04" Nov 28 12:08:15 crc kubenswrapper[5030]: I1128 12:08:15.171541 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.687185 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr"] Nov 28 12:08:18 crc kubenswrapper[5030]: E1128 12:08:18.687777 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerName="util" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.687790 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerName="util" Nov 28 12:08:18 crc kubenswrapper[5030]: E1128 12:08:18.687807 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerName="extract" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.687814 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerName="extract" Nov 28 12:08:18 crc kubenswrapper[5030]: E1128 12:08:18.687834 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerName="pull" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.687841 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerName="pull" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.687950 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3fd10b8-6b32-4a76-80a1-14a3ea9b4985" containerName="extract" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.688567 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.691650 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.692893 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-j2rwb" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.709915 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr"] Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.874635 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-webhook-cert\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.874701 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-apiservice-cert\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.874739 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j29sk\" (UniqueName: \"kubernetes.io/projected/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-kube-api-access-j29sk\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: 
\"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.976200 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-apiservice-cert\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.976501 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j29sk\" (UniqueName: \"kubernetes.io/projected/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-kube-api-access-j29sk\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.976660 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-webhook-cert\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.983819 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-webhook-cert\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.991182 5030 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-apiservice-cert\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:18 crc kubenswrapper[5030]: I1128 12:08:18.996668 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j29sk\" (UniqueName: \"kubernetes.io/projected/dc60e0cc-8fdc-4dd8-b191-2f2118e85785-kube-api-access-j29sk\") pod \"infra-operator-controller-manager-58cc75b84f-rp7cr\" (UID: \"dc60e0cc-8fdc-4dd8-b191-2f2118e85785\") " pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:19 crc kubenswrapper[5030]: I1128 12:08:19.019608 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:19 crc kubenswrapper[5030]: I1128 12:08:19.325834 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr"] Nov 28 12:08:20 crc kubenswrapper[5030]: I1128 12:08:20.206011 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" event={"ID":"dc60e0cc-8fdc-4dd8-b191-2f2118e85785","Type":"ContainerStarted","Data":"8bbdd7d5c127d5f7b3db7216c887fb9fda809b6a96fd10a3e1b2b6a325a707fb"} Nov 28 12:08:23 crc kubenswrapper[5030]: I1128 12:08:23.234629 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" event={"ID":"dc60e0cc-8fdc-4dd8-b191-2f2118e85785","Type":"ContainerStarted","Data":"30d5134a5a9b8e37a6622b72ad8241a19c711d1c597e5b2747d698f8bbeba3ac"} Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.048710 5030 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["glance-kuttl-tests/openstack-galera-0"] Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.050280 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.056147 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openshift-service-ca.crt" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.056460 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config-data" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.057132 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"galera-openstack-dockercfg-g96cl" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.057418 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"kube-root-ca.crt" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.061648 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.067782 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.071043 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.071238 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.072660 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.097692 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.105228 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.148782 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208261 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xrzq\" (UniqueName: \"kubernetes.io/projected/71bc6057-afa8-4d14-8007-63a195454497-kube-api-access-2xrzq\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208349 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-config-data-default\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208419 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-operator-scripts\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208516 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208555 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nts\" (UniqueName: \"kubernetes.io/projected/58f32b69-3330-4888-85e8-b3e0b0eed50c-kube-api-access-r7nts\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208825 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/58f32b69-3330-4888-85e8-b3e0b0eed50c-config-data-generated\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208936 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.208984 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71bc6057-afa8-4d14-8007-63a195454497-config-data-generated\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209021 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-kolla-config\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209101 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-kolla-config\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209243 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209293 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209332 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209373 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/1fc49197-af09-489d-a1cf-a6faef96e773-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209406 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djjdq\" (UniqueName: \"kubernetes.io/projected/1fc49197-af09-489d-a1cf-a6faef96e773-kube-api-access-djjdq\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209518 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-operator-scripts\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209740 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-config-data-default\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.209848 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.264822 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" 
event={"ID":"dc60e0cc-8fdc-4dd8-b191-2f2118e85785","Type":"ContainerStarted","Data":"cdf71298c395f93577117f6090436dfacd00912ac2fdd7b45031f6d064ac9f8f"} Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.265184 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.271636 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.300010 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58cc75b84f-rp7cr" podStartSLOduration=2.155884679 podStartE2EDuration="9.299980893s" podCreationTimestamp="2025-11-28 12:08:18 +0000 UTC" firstStartedPulling="2025-11-28 12:08:19.334417006 +0000 UTC m=+917.276159689" lastFinishedPulling="2025-11-28 12:08:26.47851322 +0000 UTC m=+924.420255903" observedRunningTime="2025-11-28 12:08:27.297261389 +0000 UTC m=+925.239004112" watchObservedRunningTime="2025-11-28 12:08:27.299980893 +0000 UTC m=+925.241723606" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311186 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-operator-scripts\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311258 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 
crc kubenswrapper[5030]: I1128 12:08:27.311314 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7nts\" (UniqueName: \"kubernetes.io/projected/58f32b69-3330-4888-85e8-b3e0b0eed50c-kube-api-access-r7nts\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311361 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/58f32b69-3330-4888-85e8-b3e0b0eed50c-config-data-generated\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311391 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71bc6057-afa8-4d14-8007-63a195454497-config-data-generated\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311414 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-kolla-config\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311439 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311490 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-kolla-config\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311531 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311561 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311586 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311608 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc49197-af09-489d-a1cf-a6faef96e773-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311631 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djjdq\" (UniqueName: 
\"kubernetes.io/projected/1fc49197-af09-489d-a1cf-a6faef96e773-kube-api-access-djjdq\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311657 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-operator-scripts\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311688 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-config-data-default\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311723 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.311756 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xrzq\" (UniqueName: \"kubernetes.io/projected/71bc6057-afa8-4d14-8007-63a195454497-kube-api-access-2xrzq\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.312143 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/58f32b69-3330-4888-85e8-b3e0b0eed50c-config-data-generated\") pod 
\"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.312221 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.312268 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.312405 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fc49197-af09-489d-a1cf-a6faef96e773-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.312628 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.312639 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") device mount path 
\"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.313272 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71bc6057-afa8-4d14-8007-63a195454497-config-data-generated\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.313413 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-kolla-config\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.314225 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-operator-scripts\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.314310 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-config-data-default\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.314532 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/58f32b69-3330-4888-85e8-b3e0b0eed50c-config-data-default\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc 
kubenswrapper[5030]: I1128 12:08:27.315361 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.316072 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-config-data-default\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.316436 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-kolla-config\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.319812 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc49197-af09-489d-a1cf-a6faef96e773-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.320348 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71bc6057-afa8-4d14-8007-63a195454497-operator-scripts\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.344321 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r7nts\" (UniqueName: \"kubernetes.io/projected/58f32b69-3330-4888-85e8-b3e0b0eed50c-kube-api-access-r7nts\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.344333 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.350775 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.356251 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djjdq\" (UniqueName: \"kubernetes.io/projected/1fc49197-af09-489d-a1cf-a6faef96e773-kube-api-access-djjdq\") pod \"openstack-galera-0\" (UID: \"1fc49197-af09-489d-a1cf-a6faef96e773\") " pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.368191 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xrzq\" (UniqueName: \"kubernetes.io/projected/71bc6057-afa8-4d14-8007-63a195454497-kube-api-access-2xrzq\") pod \"openstack-galera-1\" (UID: \"71bc6057-afa8-4d14-8007-63a195454497\") " pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.377156 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.379350 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"openstack-galera-2\" (UID: \"58f32b69-3330-4888-85e8-b3e0b0eed50c\") " pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.390807 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.399391 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.886741 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.929025 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Nov 28 12:08:27 crc kubenswrapper[5030]: W1128 12:08:27.953668 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc49197_af09_489d_a1cf_a6faef96e773.slice/crio-0369c0a81e8f3be912bf87ca43f4fa2edd86d4b5b4e46d8143a1198595965e74 WatchSource:0}: Error finding container 0369c0a81e8f3be912bf87ca43f4fa2edd86d4b5b4e46d8143a1198595965e74: Status 404 returned error can't find the container with id 0369c0a81e8f3be912bf87ca43f4fa2edd86d4b5b4e46d8143a1198595965e74 Nov 28 12:08:27 crc kubenswrapper[5030]: I1128 12:08:27.984588 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Nov 28 12:08:27 crc kubenswrapper[5030]: W1128 12:08:27.998236 5030 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71bc6057_afa8_4d14_8007_63a195454497.slice/crio-b6845f60110499db37e369fae4d36b44d232439b80cf4ed16b903a1179dce676 WatchSource:0}: Error finding container b6845f60110499db37e369fae4d36b44d232439b80cf4ed16b903a1179dce676: Status 404 returned error can't find the container with id b6845f60110499db37e369fae4d36b44d232439b80cf4ed16b903a1179dce676 Nov 28 12:08:28 crc kubenswrapper[5030]: I1128 12:08:28.274370 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"71bc6057-afa8-4d14-8007-63a195454497","Type":"ContainerStarted","Data":"b6845f60110499db37e369fae4d36b44d232439b80cf4ed16b903a1179dce676"} Nov 28 12:08:28 crc kubenswrapper[5030]: I1128 12:08:28.276122 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"58f32b69-3330-4888-85e8-b3e0b0eed50c","Type":"ContainerStarted","Data":"0c3b9bc77bcbf4a7c011a9a8bfb6d288edcba046971342bec422fc6fe1a346d8"} Nov 28 12:08:28 crc kubenswrapper[5030]: I1128 12:08:28.277948 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"1fc49197-af09-489d-a1cf-a6faef96e773","Type":"ContainerStarted","Data":"0369c0a81e8f3be912bf87ca43f4fa2edd86d4b5b4e46d8143a1198595965e74"} Nov 28 12:08:31 crc kubenswrapper[5030]: I1128 12:08:31.656724 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-clrrq"] Nov 28 12:08:31 crc kubenswrapper[5030]: I1128 12:08:31.660419 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" Nov 28 12:08:31 crc kubenswrapper[5030]: I1128 12:08:31.670451 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-clrrq"] Nov 28 12:08:31 crc kubenswrapper[5030]: I1128 12:08:31.700892 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-jzq6f" Nov 28 12:08:31 crc kubenswrapper[5030]: I1128 12:08:31.803083 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4fvj\" (UniqueName: \"kubernetes.io/projected/6113ca86-03fd-4525-a4a1-fa7e2a5f9173-kube-api-access-k4fvj\") pod \"rabbitmq-cluster-operator-index-clrrq\" (UID: \"6113ca86-03fd-4525-a4a1-fa7e2a5f9173\") " pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" Nov 28 12:08:31 crc kubenswrapper[5030]: I1128 12:08:31.905525 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4fvj\" (UniqueName: \"kubernetes.io/projected/6113ca86-03fd-4525-a4a1-fa7e2a5f9173-kube-api-access-k4fvj\") pod \"rabbitmq-cluster-operator-index-clrrq\" (UID: \"6113ca86-03fd-4525-a4a1-fa7e2a5f9173\") " pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" Nov 28 12:08:31 crc kubenswrapper[5030]: I1128 12:08:31.928491 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4fvj\" (UniqueName: \"kubernetes.io/projected/6113ca86-03fd-4525-a4a1-fa7e2a5f9173-kube-api-access-k4fvj\") pod \"rabbitmq-cluster-operator-index-clrrq\" (UID: \"6113ca86-03fd-4525-a4a1-fa7e2a5f9173\") " pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" Nov 28 12:08:32 crc kubenswrapper[5030]: I1128 12:08:32.028359 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" Nov 28 12:08:33 crc kubenswrapper[5030]: I1128 12:08:33.201631 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:08:33 crc kubenswrapper[5030]: I1128 12:08:33.201703 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:08:36 crc kubenswrapper[5030]: I1128 12:08:36.650230 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-clrrq"] Nov 28 12:08:37 crc kubenswrapper[5030]: I1128 12:08:37.460317 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-78v5c"] Nov 28 12:08:37 crc kubenswrapper[5030]: I1128 12:08:37.461758 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:37 crc kubenswrapper[5030]: I1128 12:08:37.487509 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-78v5c"] Nov 28 12:08:37 crc kubenswrapper[5030]: I1128 12:08:37.529779 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs6fh\" (UniqueName: \"kubernetes.io/projected/a40eab08-1542-4a5a-b92f-ad99f4a6e6a3-kube-api-access-cs6fh\") pod \"rabbitmq-cluster-operator-index-78v5c\" (UID: \"a40eab08-1542-4a5a-b92f-ad99f4a6e6a3\") " pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:37 crc kubenswrapper[5030]: I1128 12:08:37.631936 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs6fh\" (UniqueName: \"kubernetes.io/projected/a40eab08-1542-4a5a-b92f-ad99f4a6e6a3-kube-api-access-cs6fh\") pod \"rabbitmq-cluster-operator-index-78v5c\" (UID: \"a40eab08-1542-4a5a-b92f-ad99f4a6e6a3\") " pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:37 crc kubenswrapper[5030]: I1128 12:08:37.658184 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs6fh\" (UniqueName: \"kubernetes.io/projected/a40eab08-1542-4a5a-b92f-ad99f4a6e6a3-kube-api-access-cs6fh\") pod \"rabbitmq-cluster-operator-index-78v5c\" (UID: \"a40eab08-1542-4a5a-b92f-ad99f4a6e6a3\") " pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:37 crc kubenswrapper[5030]: I1128 12:08:37.804558 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.545529 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-clrrq"] Nov 28 12:08:38 crc kubenswrapper[5030]: W1128 12:08:38.562668 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6113ca86_03fd_4525_a4a1_fa7e2a5f9173.slice/crio-d5829ed670e22d6ce905e2e6d533ad434c9a97e03bcf2652493062212fcd8f05 WatchSource:0}: Error finding container d5829ed670e22d6ce905e2e6d533ad434c9a97e03bcf2652493062212fcd8f05: Status 404 returned error can't find the container with id d5829ed670e22d6ce905e2e6d533ad434c9a97e03bcf2652493062212fcd8f05 Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.666159 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tsppw"] Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.667913 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.680765 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsppw"] Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.753346 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg84h\" (UniqueName: \"kubernetes.io/projected/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-kube-api-access-gg84h\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.753438 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-utilities\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.753564 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-catalog-content\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.797807 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-78v5c"] Nov 28 12:08:38 crc kubenswrapper[5030]: W1128 12:08:38.803846 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda40eab08_1542_4a5a_b92f_ad99f4a6e6a3.slice/crio-0012faea006d2a7805a27d2449f61f86041423029de0129fbd04e8dbcba3c70e 
WatchSource:0}: Error finding container 0012faea006d2a7805a27d2449f61f86041423029de0129fbd04e8dbcba3c70e: Status 404 returned error can't find the container with id 0012faea006d2a7805a27d2449f61f86041423029de0129fbd04e8dbcba3c70e Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.857114 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg84h\" (UniqueName: \"kubernetes.io/projected/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-kube-api-access-gg84h\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.857166 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-utilities\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.857217 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-catalog-content\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.857740 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-utilities\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.857773 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-catalog-content\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:38 crc kubenswrapper[5030]: I1128 12:08:38.881155 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg84h\" (UniqueName: \"kubernetes.io/projected/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-kube-api-access-gg84h\") pod \"community-operators-tsppw\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:39 crc kubenswrapper[5030]: I1128 12:08:39.056212 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:39 crc kubenswrapper[5030]: I1128 12:08:39.328082 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tsppw"] Nov 28 12:08:39 crc kubenswrapper[5030]: I1128 12:08:39.380911 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"71bc6057-afa8-4d14-8007-63a195454497","Type":"ContainerStarted","Data":"ddac65284fa63c5e2ac09176e0d03349d949b15fd083d7ceb2ec60f404ed826b"} Nov 28 12:08:39 crc kubenswrapper[5030]: I1128 12:08:39.385864 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"58f32b69-3330-4888-85e8-b3e0b0eed50c","Type":"ContainerStarted","Data":"fe68d1510005608722da0fc4f0c7180278d424db524b2919e6e2def740703078"} Nov 28 12:08:39 crc kubenswrapper[5030]: I1128 12:08:39.387183 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" event={"ID":"a40eab08-1542-4a5a-b92f-ad99f4a6e6a3","Type":"ContainerStarted","Data":"0012faea006d2a7805a27d2449f61f86041423029de0129fbd04e8dbcba3c70e"} Nov 28 12:08:39 crc kubenswrapper[5030]: I1128 
12:08:39.388295 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" event={"ID":"6113ca86-03fd-4525-a4a1-fa7e2a5f9173","Type":"ContainerStarted","Data":"d5829ed670e22d6ce905e2e6d533ad434c9a97e03bcf2652493062212fcd8f05"} Nov 28 12:08:39 crc kubenswrapper[5030]: I1128 12:08:39.389910 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"1fc49197-af09-489d-a1cf-a6faef96e773","Type":"ContainerStarted","Data":"ea79aa16b5feb6bdeae6082e1b6201bb02dff97a1318840687b12d0f5471b92a"} Nov 28 12:08:41 crc kubenswrapper[5030]: W1128 12:08:41.128907 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0e0ea75_97b8_489e_81ec_a74cf4f0daa9.slice/crio-21eae9a944cfc59e5074fc756a211616d785c6b97115b53c7b4a6ad3fcadd063 WatchSource:0}: Error finding container 21eae9a944cfc59e5074fc756a211616d785c6b97115b53c7b4a6ad3fcadd063: Status 404 returned error can't find the container with id 21eae9a944cfc59e5074fc756a211616d785c6b97115b53c7b4a6ad3fcadd063 Nov 28 12:08:41 crc kubenswrapper[5030]: I1128 12:08:41.421397 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsppw" event={"ID":"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9","Type":"ContainerStarted","Data":"21eae9a944cfc59e5074fc756a211616d785c6b97115b53c7b4a6ad3fcadd063"} Nov 28 12:08:42 crc kubenswrapper[5030]: I1128 12:08:42.443003 5030 generic.go:334] "Generic (PLEG): container finished" podID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerID="43fe79284e6d131703297ae8d63906109e3cd2dd33384182c4b0a4248d6314e2" exitCode=0 Nov 28 12:08:42 crc kubenswrapper[5030]: I1128 12:08:42.443355 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsppw" 
event={"ID":"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9","Type":"ContainerDied","Data":"43fe79284e6d131703297ae8d63906109e3cd2dd33384182c4b0a4248d6314e2"} Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.462199 5030 generic.go:334] "Generic (PLEG): container finished" podID="1fc49197-af09-489d-a1cf-a6faef96e773" containerID="ea79aa16b5feb6bdeae6082e1b6201bb02dff97a1318840687b12d0f5471b92a" exitCode=0 Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.462360 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"1fc49197-af09-489d-a1cf-a6faef96e773","Type":"ContainerDied","Data":"ea79aa16b5feb6bdeae6082e1b6201bb02dff97a1318840687b12d0f5471b92a"} Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.467307 5030 generic.go:334] "Generic (PLEG): container finished" podID="71bc6057-afa8-4d14-8007-63a195454497" containerID="ddac65284fa63c5e2ac09176e0d03349d949b15fd083d7ceb2ec60f404ed826b" exitCode=0 Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.467843 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"71bc6057-afa8-4d14-8007-63a195454497","Type":"ContainerDied","Data":"ddac65284fa63c5e2ac09176e0d03349d949b15fd083d7ceb2ec60f404ed826b"} Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.472300 5030 generic.go:334] "Generic (PLEG): container finished" podID="58f32b69-3330-4888-85e8-b3e0b0eed50c" containerID="fe68d1510005608722da0fc4f0c7180278d424db524b2919e6e2def740703078" exitCode=0 Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.472383 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"58f32b69-3330-4888-85e8-b3e0b0eed50c","Type":"ContainerDied","Data":"fe68d1510005608722da0fc4f0c7180278d424db524b2919e6e2def740703078"} Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.482785 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" event={"ID":"a40eab08-1542-4a5a-b92f-ad99f4a6e6a3","Type":"ContainerStarted","Data":"627ef79ca380513001da3383a6fa6d81cfd544afa9040666ee5fe71349d7a235"} Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.489615 5030 generic.go:334] "Generic (PLEG): container finished" podID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerID="c59570754dcaf2adc23f7ab4eb806a82b3b05cc29229d2adb49efd43b92617cb" exitCode=0 Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.489734 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsppw" event={"ID":"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9","Type":"ContainerDied","Data":"c59570754dcaf2adc23f7ab4eb806a82b3b05cc29229d2adb49efd43b92617cb"} Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.499206 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" event={"ID":"6113ca86-03fd-4525-a4a1-fa7e2a5f9173","Type":"ContainerStarted","Data":"31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d"} Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.499423 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" podUID="6113ca86-03fd-4525-a4a1-fa7e2a5f9173" containerName="registry-server" containerID="cri-o://31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d" gracePeriod=2 Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.588833 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" podStartSLOduration=2.68228474 podStartE2EDuration="7.588804153s" podCreationTimestamp="2025-11-28 12:08:37 +0000 UTC" firstStartedPulling="2025-11-28 12:08:38.807597431 +0000 UTC m=+936.749340114" lastFinishedPulling="2025-11-28 12:08:43.714116834 +0000 UTC m=+941.655859527" 
observedRunningTime="2025-11-28 12:08:44.584442956 +0000 UTC m=+942.526185629" watchObservedRunningTime="2025-11-28 12:08:44.588804153 +0000 UTC m=+942.530546836" Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.639392 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" podStartSLOduration=8.510787416 podStartE2EDuration="13.639365049s" podCreationTimestamp="2025-11-28 12:08:31 +0000 UTC" firstStartedPulling="2025-11-28 12:08:38.564664817 +0000 UTC m=+936.506407500" lastFinishedPulling="2025-11-28 12:08:43.69324243 +0000 UTC m=+941.634985133" observedRunningTime="2025-11-28 12:08:44.629274637 +0000 UTC m=+942.571017320" watchObservedRunningTime="2025-11-28 12:08:44.639365049 +0000 UTC m=+942.581107732" Nov 28 12:08:44 crc kubenswrapper[5030]: I1128 12:08:44.969294 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.077964 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4fvj\" (UniqueName: \"kubernetes.io/projected/6113ca86-03fd-4525-a4a1-fa7e2a5f9173-kube-api-access-k4fvj\") pod \"6113ca86-03fd-4525-a4a1-fa7e2a5f9173\" (UID: \"6113ca86-03fd-4525-a4a1-fa7e2a5f9173\") " Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.088533 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6113ca86-03fd-4525-a4a1-fa7e2a5f9173-kube-api-access-k4fvj" (OuterVolumeSpecName: "kube-api-access-k4fvj") pod "6113ca86-03fd-4525-a4a1-fa7e2a5f9173" (UID: "6113ca86-03fd-4525-a4a1-fa7e2a5f9173"). InnerVolumeSpecName "kube-api-access-k4fvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.181084 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4fvj\" (UniqueName: \"kubernetes.io/projected/6113ca86-03fd-4525-a4a1-fa7e2a5f9173-kube-api-access-k4fvj\") on node \"crc\" DevicePath \"\"" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.521365 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"58f32b69-3330-4888-85e8-b3e0b0eed50c","Type":"ContainerStarted","Data":"b8cd158e6fb5cdd63de2ecd06fcd4850c1020169e8c36194635aebb5d0fbb70c"} Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.532667 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsppw" event={"ID":"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9","Type":"ContainerStarted","Data":"b089be948d4f8431f45411215b352805408fd154144358b3d1d1ff1cb3f37291"} Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.536089 5030 generic.go:334] "Generic (PLEG): container finished" podID="6113ca86-03fd-4525-a4a1-fa7e2a5f9173" containerID="31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d" exitCode=0 Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.536200 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" event={"ID":"6113ca86-03fd-4525-a4a1-fa7e2a5f9173","Type":"ContainerDied","Data":"31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d"} Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.536279 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" event={"ID":"6113ca86-03fd-4525-a4a1-fa7e2a5f9173","Type":"ContainerDied","Data":"d5829ed670e22d6ce905e2e6d533ad434c9a97e03bcf2652493062212fcd8f05"} Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.536311 5030 scope.go:117] "RemoveContainer" 
containerID="31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.536535 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-clrrq" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.551030 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"1fc49197-af09-489d-a1cf-a6faef96e773","Type":"ContainerStarted","Data":"ee91acda75b0f0e85c2167869fa4050eef885a89375031becb6fc46c2bde6c90"} Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.562987 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"71bc6057-afa8-4d14-8007-63a195454497","Type":"ContainerStarted","Data":"ae3858f6707636086e615a99c56d237536b572d4b138e903cfe6ec1a241dc78b"} Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.564092 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-2" podStartSLOduration=9.047346752 podStartE2EDuration="19.564063021s" podCreationTimestamp="2025-11-28 12:08:26 +0000 UTC" firstStartedPulling="2025-11-28 12:08:27.895383738 +0000 UTC m=+925.837126421" lastFinishedPulling="2025-11-28 12:08:38.412100007 +0000 UTC m=+936.353842690" observedRunningTime="2025-11-28 12:08:45.558323915 +0000 UTC m=+943.500066608" watchObservedRunningTime="2025-11-28 12:08:45.564063021 +0000 UTC m=+943.505805714" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.588721 5030 scope.go:117] "RemoveContainer" containerID="31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d" Nov 28 12:08:45 crc kubenswrapper[5030]: E1128 12:08:45.593542 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d\": container with ID 
starting with 31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d not found: ID does not exist" containerID="31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.593594 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d"} err="failed to get container status \"31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d\": rpc error: code = NotFound desc = could not find container \"31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d\": container with ID starting with 31e06df92b3fdc753478586ed24249cd230968e131b574dfcea8a4689036c87d not found: ID does not exist" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.602877 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-0" podStartSLOduration=9.161171587 podStartE2EDuration="19.602852909s" podCreationTimestamp="2025-11-28 12:08:26 +0000 UTC" firstStartedPulling="2025-11-28 12:08:27.957865046 +0000 UTC m=+925.899607729" lastFinishedPulling="2025-11-28 12:08:38.399546367 +0000 UTC m=+936.341289051" observedRunningTime="2025-11-28 12:08:45.598105181 +0000 UTC m=+943.539847874" watchObservedRunningTime="2025-11-28 12:08:45.602852909 +0000 UTC m=+943.544595602" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.636529 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tsppw" podStartSLOduration=5.04141519 podStartE2EDuration="7.636502538s" podCreationTimestamp="2025-11-28 12:08:38 +0000 UTC" firstStartedPulling="2025-11-28 12:08:42.462252314 +0000 UTC m=+940.403994997" lastFinishedPulling="2025-11-28 12:08:45.057339662 +0000 UTC m=+942.999082345" observedRunningTime="2025-11-28 12:08:45.633779224 +0000 UTC m=+943.575521917" watchObservedRunningTime="2025-11-28 
12:08:45.636502538 +0000 UTC m=+943.578245231" Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.651797 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-clrrq"] Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.657806 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-clrrq"] Nov 28 12:08:45 crc kubenswrapper[5030]: I1128 12:08:45.668701 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-1" podStartSLOduration=9.250433649 podStartE2EDuration="19.668683697s" podCreationTimestamp="2025-11-28 12:08:26 +0000 UTC" firstStartedPulling="2025-11-28 12:08:28.001378382 +0000 UTC m=+925.943121065" lastFinishedPulling="2025-11-28 12:08:38.41962843 +0000 UTC m=+936.361371113" observedRunningTime="2025-11-28 12:08:45.667022592 +0000 UTC m=+943.608765295" watchObservedRunningTime="2025-11-28 12:08:45.668683697 +0000 UTC m=+943.610426380" Nov 28 12:08:46 crc kubenswrapper[5030]: I1128 12:08:46.406289 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6113ca86-03fd-4525-a4a1-fa7e2a5f9173" path="/var/lib/kubelet/pods/6113ca86-03fd-4525-a4a1-fa7e2a5f9173/volumes" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.377853 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.378577 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.392180 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.392717 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.400136 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.400200 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.805258 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.805356 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:47 crc kubenswrapper[5030]: I1128 12:08:47.836942 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:48 crc kubenswrapper[5030]: I1128 12:08:48.631247 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-78v5c" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.057024 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.057109 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.134716 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.273199 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/memcached-0"] Nov 28 12:08:49 crc kubenswrapper[5030]: E1128 12:08:49.273502 5030 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6113ca86-03fd-4525-a4a1-fa7e2a5f9173" containerName="registry-server" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.273521 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="6113ca86-03fd-4525-a4a1-fa7e2a5f9173" containerName="registry-server" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.273651 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="6113ca86-03fd-4525-a4a1-fa7e2a5f9173" containerName="registry-server" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.274094 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.276365 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"memcached-config-data" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.276638 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"memcached-memcached-dockercfg-n246p" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.298703 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.355295 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64d47945-e54a-49e9-acfb-40b62274a05b-config-data\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.355362 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64d47945-e54a-49e9-acfb-40b62274a05b-kolla-config\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 
12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.355782 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjn4x\" (UniqueName: \"kubernetes.io/projected/64d47945-e54a-49e9-acfb-40b62274a05b-kube-api-access-xjn4x\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.457592 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/64d47945-e54a-49e9-acfb-40b62274a05b-kolla-config\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.457713 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjn4x\" (UniqueName: \"kubernetes.io/projected/64d47945-e54a-49e9-acfb-40b62274a05b-kube-api-access-xjn4x\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.457749 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64d47945-e54a-49e9-acfb-40b62274a05b-config-data\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.458913 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64d47945-e54a-49e9-acfb-40b62274a05b-config-data\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.458920 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/64d47945-e54a-49e9-acfb-40b62274a05b-kolla-config\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.482828 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjn4x\" (UniqueName: \"kubernetes.io/projected/64d47945-e54a-49e9-acfb-40b62274a05b-kube-api-access-xjn4x\") pod \"memcached-0\" (UID: \"64d47945-e54a-49e9-acfb-40b62274a05b\") " pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:49 crc kubenswrapper[5030]: I1128 12:08:49.593822 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:50 crc kubenswrapper[5030]: I1128 12:08:50.078824 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Nov 28 12:08:50 crc kubenswrapper[5030]: I1128 12:08:50.600625 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"64d47945-e54a-49e9-acfb-40b62274a05b","Type":"ContainerStarted","Data":"1007efdfbdf37788d3886a2f08509a5e527cd7d060e818b5eed132099039887e"} Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.317566 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh"] Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.319590 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.322125 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-br5mr" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.334282 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh"] Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.390115 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72bkz\" (UniqueName: \"kubernetes.io/projected/ebc31616-3bb5-4c70-a664-7bbe8152ff83-kube-api-access-72bkz\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.390178 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.390219 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 
12:08:51.491840 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72bkz\" (UniqueName: \"kubernetes.io/projected/ebc31616-3bb5-4c70-a664-7bbe8152ff83-kube-api-access-72bkz\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.491918 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.491960 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.493972 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.494347 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.517814 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72bkz\" (UniqueName: \"kubernetes.io/projected/ebc31616-3bb5-4c70-a664-7bbe8152ff83-kube-api-access-72bkz\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:51 crc kubenswrapper[5030]: I1128 12:08:51.639312 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:52 crc kubenswrapper[5030]: I1128 12:08:52.129731 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh"] Nov 28 12:08:52 crc kubenswrapper[5030]: W1128 12:08:52.140826 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebc31616_3bb5_4c70_a664_7bbe8152ff83.slice/crio-0feeb7d5e434b82a6c745abf87e917cc701a4c2fe397a8f7d6854455a15d41fd WatchSource:0}: Error finding container 0feeb7d5e434b82a6c745abf87e917cc701a4c2fe397a8f7d6854455a15d41fd: Status 404 returned error can't find the container with id 0feeb7d5e434b82a6c745abf87e917cc701a4c2fe397a8f7d6854455a15d41fd Nov 28 12:08:52 crc kubenswrapper[5030]: I1128 12:08:52.642733 5030 generic.go:334] "Generic (PLEG): container finished" podID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerID="c289cfd8b1d5df57d4c81eba7222ead779b68f4316e2105917ca8d6c1a663aa2" exitCode=0 Nov 28 
12:08:52 crc kubenswrapper[5030]: I1128 12:08:52.642805 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" event={"ID":"ebc31616-3bb5-4c70-a664-7bbe8152ff83","Type":"ContainerDied","Data":"c289cfd8b1d5df57d4c81eba7222ead779b68f4316e2105917ca8d6c1a663aa2"} Nov 28 12:08:52 crc kubenswrapper[5030]: I1128 12:08:52.642849 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" event={"ID":"ebc31616-3bb5-4c70-a664-7bbe8152ff83","Type":"ContainerStarted","Data":"0feeb7d5e434b82a6c745abf87e917cc701a4c2fe397a8f7d6854455a15d41fd"} Nov 28 12:08:53 crc kubenswrapper[5030]: I1128 12:08:53.554278 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:53 crc kubenswrapper[5030]: I1128 12:08:53.648953 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-2" Nov 28 12:08:53 crc kubenswrapper[5030]: I1128 12:08:53.653112 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"64d47945-e54a-49e9-acfb-40b62274a05b","Type":"ContainerStarted","Data":"7306342da949427907d9b529c9457d1a2450b47489bd98ed1d134a739ac91072"} Nov 28 12:08:53 crc kubenswrapper[5030]: I1128 12:08:53.653246 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:53 crc kubenswrapper[5030]: I1128 12:08:53.696971 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/memcached-0" podStartSLOduration=1.5340022869999999 podStartE2EDuration="4.696947767s" podCreationTimestamp="2025-11-28 12:08:49 +0000 UTC" firstStartedPulling="2025-11-28 12:08:50.092000516 +0000 UTC m=+948.033743239" lastFinishedPulling="2025-11-28 12:08:53.254946026 +0000 UTC 
m=+951.196688719" observedRunningTime="2025-11-28 12:08:53.691768377 +0000 UTC m=+951.633511070" watchObservedRunningTime="2025-11-28 12:08:53.696947767 +0000 UTC m=+951.638690450" Nov 28 12:08:54 crc kubenswrapper[5030]: I1128 12:08:54.664946 5030 generic.go:334] "Generic (PLEG): container finished" podID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerID="810b048237c2287cf7808b3f2d9a3e2c9b9a0b8824b14a7713bf435146a0808e" exitCode=0 Nov 28 12:08:54 crc kubenswrapper[5030]: I1128 12:08:54.665047 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" event={"ID":"ebc31616-3bb5-4c70-a664-7bbe8152ff83","Type":"ContainerDied","Data":"810b048237c2287cf7808b3f2d9a3e2c9b9a0b8824b14a7713bf435146a0808e"} Nov 28 12:08:55 crc kubenswrapper[5030]: I1128 12:08:55.676524 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" event={"ID":"ebc31616-3bb5-4c70-a664-7bbe8152ff83","Type":"ContainerDied","Data":"1a6fa7d1033f80eac238d50b17723b4dca3354702d22f8e5052d8ef740ea0dca"} Nov 28 12:08:55 crc kubenswrapper[5030]: I1128 12:08:55.676450 5030 generic.go:334] "Generic (PLEG): container finished" podID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerID="1a6fa7d1033f80eac238d50b17723b4dca3354702d22f8e5052d8ef740ea0dca" exitCode=0 Nov 28 12:08:55 crc kubenswrapper[5030]: E1128 12:08:55.940254 5030 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.110:39488->38.102.83.110:45115: write tcp 38.102.83.110:39488->38.102.83.110:45115: write: broken pipe Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.059577 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.184663 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-bundle\") pod \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.184921 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72bkz\" (UniqueName: \"kubernetes.io/projected/ebc31616-3bb5-4c70-a664-7bbe8152ff83-kube-api-access-72bkz\") pod \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.185011 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-util\") pod \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\" (UID: \"ebc31616-3bb5-4c70-a664-7bbe8152ff83\") " Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.186051 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-bundle" (OuterVolumeSpecName: "bundle") pod "ebc31616-3bb5-4c70-a664-7bbe8152ff83" (UID: "ebc31616-3bb5-4c70-a664-7bbe8152ff83"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.202965 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc31616-3bb5-4c70-a664-7bbe8152ff83-kube-api-access-72bkz" (OuterVolumeSpecName: "kube-api-access-72bkz") pod "ebc31616-3bb5-4c70-a664-7bbe8152ff83" (UID: "ebc31616-3bb5-4c70-a664-7bbe8152ff83"). InnerVolumeSpecName "kube-api-access-72bkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.218889 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-util" (OuterVolumeSpecName: "util") pod "ebc31616-3bb5-4c70-a664-7bbe8152ff83" (UID: "ebc31616-3bb5-4c70-a664-7bbe8152ff83"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.287578 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.287690 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ebc31616-3bb5-4c70-a664-7bbe8152ff83-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.287716 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72bkz\" (UniqueName: \"kubernetes.io/projected/ebc31616-3bb5-4c70-a664-7bbe8152ff83-kube-api-access-72bkz\") on node \"crc\" DevicePath \"\"" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.695645 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" event={"ID":"ebc31616-3bb5-4c70-a664-7bbe8152ff83","Type":"ContainerDied","Data":"0feeb7d5e434b82a6c745abf87e917cc701a4c2fe397a8f7d6854455a15d41fd"} Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.695699 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0feeb7d5e434b82a6c745abf87e917cc701a4c2fe397a8f7d6854455a15d41fd" Nov 28 12:08:57 crc kubenswrapper[5030]: I1128 12:08:57.695777 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh" Nov 28 12:08:59 crc kubenswrapper[5030]: E1128 12:08:59.123400 5030 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.110:56616->38.102.83.110:45115: write tcp 192.168.126.11:10250->192.168.126.11:47154: write: broken pipe Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.125231 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.266344 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65mhk"] Nov 28 12:08:59 crc kubenswrapper[5030]: E1128 12:08:59.266662 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerName="extract" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.266679 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerName="extract" Nov 28 12:08:59 crc kubenswrapper[5030]: E1128 12:08:59.266694 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerName="pull" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.266700 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerName="pull" Nov 28 12:08:59 crc kubenswrapper[5030]: E1128 12:08:59.266715 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerName="util" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.266723 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerName="util" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.266850 5030 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ebc31616-3bb5-4c70-a664-7bbe8152ff83" containerName="extract" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.267855 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.291060 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65mhk"] Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.326066 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-utilities\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.326156 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-catalog-content\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.326276 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4wvd\" (UniqueName: \"kubernetes.io/projected/f9c576c5-1a1a-4dd0-9370-12c58da88047-kube-api-access-g4wvd\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.428229 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4wvd\" (UniqueName: \"kubernetes.io/projected/f9c576c5-1a1a-4dd0-9370-12c58da88047-kube-api-access-g4wvd\") pod \"certified-operators-65mhk\" (UID: 
\"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.428327 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-utilities\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.428366 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-catalog-content\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.428876 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-utilities\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.428905 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-catalog-content\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.460366 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4wvd\" (UniqueName: \"kubernetes.io/projected/f9c576c5-1a1a-4dd0-9370-12c58da88047-kube-api-access-g4wvd\") pod \"certified-operators-65mhk\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " 
pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.595762 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/memcached-0" Nov 28 12:08:59 crc kubenswrapper[5030]: I1128 12:08:59.640966 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:09:00 crc kubenswrapper[5030]: I1128 12:09:00.116774 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65mhk"] Nov 28 12:09:00 crc kubenswrapper[5030]: I1128 12:09:00.732949 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65mhk" event={"ID":"f9c576c5-1a1a-4dd0-9370-12c58da88047","Type":"ContainerStarted","Data":"8210781231bbf2dfd383f213cc2867a395d61c38396ab1160203052713aeb8fe"} Nov 28 12:09:01 crc kubenswrapper[5030]: I1128 12:09:01.253338 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsppw"] Nov 28 12:09:01 crc kubenswrapper[5030]: I1128 12:09:01.253749 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tsppw" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="registry-server" containerID="cri-o://b089be948d4f8431f45411215b352805408fd154144358b3d1d1ff1cb3f37291" gracePeriod=2 Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.202405 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.202923 5030 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.202992 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.203882 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"440c69d6f2693ab24ec11da83e2b2b49568d8223dcdef3effa26def3f51975e3"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.203943 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://440c69d6f2693ab24ec11da83e2b2b49568d8223dcdef3effa26def3f51975e3" gracePeriod=600 Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.770286 5030 generic.go:334] "Generic (PLEG): container finished" podID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerID="b089be948d4f8431f45411215b352805408fd154144358b3d1d1ff1cb3f37291" exitCode=0 Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.770450 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsppw" event={"ID":"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9","Type":"ContainerDied","Data":"b089be948d4f8431f45411215b352805408fd154144358b3d1d1ff1cb3f37291"} Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.774811 5030 generic.go:334] "Generic (PLEG): container 
finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="440c69d6f2693ab24ec11da83e2b2b49568d8223dcdef3effa26def3f51975e3" exitCode=0 Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.774900 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"440c69d6f2693ab24ec11da83e2b2b49568d8223dcdef3effa26def3f51975e3"} Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.774966 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"2b5a0df1bdf326961f0bfd95e325cb1bcebbae770d53c82e197938a5584c8725"} Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.774993 5030 scope.go:117] "RemoveContainer" containerID="1d6b24c1331357c81e9c3721fca85bfc8df7a48f3286c0b8748f4a82dbcaa4eb" Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.778293 5030 generic.go:334] "Generic (PLEG): container finished" podID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerID="4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0" exitCode=0 Nov 28 12:09:03 crc kubenswrapper[5030]: I1128 12:09:03.778329 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65mhk" event={"ID":"f9c576c5-1a1a-4dd0-9370-12c58da88047","Type":"ContainerDied","Data":"4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0"} Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.399217 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.511035 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-utilities\") pod \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.511660 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg84h\" (UniqueName: \"kubernetes.io/projected/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-kube-api-access-gg84h\") pod \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.511787 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-catalog-content\") pod \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\" (UID: \"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9\") " Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.513196 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-utilities" (OuterVolumeSpecName: "utilities") pod "e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" (UID: "e0e0ea75-97b8-489e-81ec-a74cf4f0daa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.520062 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-kube-api-access-gg84h" (OuterVolumeSpecName: "kube-api-access-gg84h") pod "e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" (UID: "e0e0ea75-97b8-489e-81ec-a74cf4f0daa9"). InnerVolumeSpecName "kube-api-access-gg84h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.560337 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" (UID: "e0e0ea75-97b8-489e-81ec-a74cf4f0daa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.613446 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.613504 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.613518 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg84h\" (UniqueName: \"kubernetes.io/projected/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9-kube-api-access-gg84h\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.793251 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tsppw" event={"ID":"e0e0ea75-97b8-489e-81ec-a74cf4f0daa9","Type":"ContainerDied","Data":"21eae9a944cfc59e5074fc756a211616d785c6b97115b53c7b4a6ad3fcadd063"} Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.793318 5030 scope.go:117] "RemoveContainer" containerID="b089be948d4f8431f45411215b352805408fd154144358b3d1d1ff1cb3f37291" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.793377 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tsppw" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.829366 5030 scope.go:117] "RemoveContainer" containerID="c59570754dcaf2adc23f7ab4eb806a82b3b05cc29229d2adb49efd43b92617cb" Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.833020 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tsppw"] Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.857148 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tsppw"] Nov 28 12:09:04 crc kubenswrapper[5030]: I1128 12:09:04.865760 5030 scope.go:117] "RemoveContainer" containerID="43fe79284e6d131703297ae8d63906109e3cd2dd33384182c4b0a4248d6314e2" Nov 28 12:09:05 crc kubenswrapper[5030]: I1128 12:09:05.802213 5030 generic.go:334] "Generic (PLEG): container finished" podID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerID="4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397" exitCode=0 Nov 28 12:09:05 crc kubenswrapper[5030]: I1128 12:09:05.802578 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65mhk" event={"ID":"f9c576c5-1a1a-4dd0-9370-12c58da88047","Type":"ContainerDied","Data":"4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397"} Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.062235 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5c6km"] Nov 28 12:09:06 crc kubenswrapper[5030]: E1128 12:09:06.062699 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="extract-utilities" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.062717 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="extract-utilities" Nov 28 12:09:06 crc kubenswrapper[5030]: E1128 12:09:06.062738 5030 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="extract-content" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.062746 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="extract-content" Nov 28 12:09:06 crc kubenswrapper[5030]: E1128 12:09:06.062762 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="registry-server" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.062771 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="registry-server" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.063444 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" containerName="registry-server" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.064820 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.072183 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5c6km"] Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.136377 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-catalog-content\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.136504 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mkp\" (UniqueName: \"kubernetes.io/projected/c5a3e1f8-3b2f-44f5-b481-938543e524e4-kube-api-access-t5mkp\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.136576 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-utilities\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.238053 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-utilities\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.238125 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-catalog-content\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.238179 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mkp\" (UniqueName: \"kubernetes.io/projected/c5a3e1f8-3b2f-44f5-b481-938543e524e4-kube-api-access-t5mkp\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.238858 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-utilities\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.238923 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-catalog-content\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.259350 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mkp\" (UniqueName: \"kubernetes.io/projected/c5a3e1f8-3b2f-44f5-b481-938543e524e4-kube-api-access-t5mkp\") pod \"redhat-marketplace-5c6km\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.385148 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.428517 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e0ea75-97b8-489e-81ec-a74cf4f0daa9" path="/var/lib/kubelet/pods/e0e0ea75-97b8-489e-81ec-a74cf4f0daa9/volumes" Nov 28 12:09:06 crc kubenswrapper[5030]: I1128 12:09:06.852438 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5c6km"] Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.498686 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-2" podUID="58f32b69-3330-4888-85e8-b3e0b0eed50c" containerName="galera" probeResult="failure" output=< Nov 28 12:09:07 crc kubenswrapper[5030]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Nov 28 12:09:07 crc kubenswrapper[5030]: > Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.595691 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg"] Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.596570 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.599123 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-zszdj" Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.614885 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg"] Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.664663 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnp8\" (UniqueName: \"kubernetes.io/projected/b55b7cc9-5974-46a7-b685-252d63a2ada3-kube-api-access-hwnp8\") pod \"rabbitmq-cluster-operator-779fc9694b-s9jkg\" (UID: \"b55b7cc9-5974-46a7-b685-252d63a2ada3\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.766550 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwnp8\" (UniqueName: \"kubernetes.io/projected/b55b7cc9-5974-46a7-b685-252d63a2ada3-kube-api-access-hwnp8\") pod \"rabbitmq-cluster-operator-779fc9694b-s9jkg\" (UID: \"b55b7cc9-5974-46a7-b685-252d63a2ada3\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.788393 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwnp8\" (UniqueName: \"kubernetes.io/projected/b55b7cc9-5974-46a7-b685-252d63a2ada3-kube-api-access-hwnp8\") pod \"rabbitmq-cluster-operator-779fc9694b-s9jkg\" (UID: \"b55b7cc9-5974-46a7-b685-252d63a2ada3\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.820891 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c6km" 
event={"ID":"c5a3e1f8-3b2f-44f5-b481-938543e524e4","Type":"ContainerStarted","Data":"04cdb725cbbb0d855e143733e026d5d8dd3826273f6b97f2fb13a1284948cc5d"} Nov 28 12:09:07 crc kubenswrapper[5030]: I1128 12:09:07.925079 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.220703 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.335823 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-1" Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.409641 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg"] Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.830399 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65mhk" event={"ID":"f9c576c5-1a1a-4dd0-9370-12c58da88047","Type":"ContainerStarted","Data":"5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7"} Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.832404 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" event={"ID":"b55b7cc9-5974-46a7-b685-252d63a2ada3","Type":"ContainerStarted","Data":"8c077d31f763f322ae9b8bb2dc452f273aa30275e26f1a62ed7c89aca3d921ea"} Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.834121 5030 generic.go:334] "Generic (PLEG): container finished" podID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerID="093f2bd0b26bfe9a21f340da51ccd52ebdb46d720e81fac8cb6116fa20709ba8" exitCode=0 Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.834184 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c6km" 
event={"ID":"c5a3e1f8-3b2f-44f5-b481-938543e524e4","Type":"ContainerDied","Data":"093f2bd0b26bfe9a21f340da51ccd52ebdb46d720e81fac8cb6116fa20709ba8"} Nov 28 12:09:08 crc kubenswrapper[5030]: I1128 12:09:08.867554 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65mhk" podStartSLOduration=6.82473629 podStartE2EDuration="9.867527703s" podCreationTimestamp="2025-11-28 12:08:59 +0000 UTC" firstStartedPulling="2025-11-28 12:09:03.782801076 +0000 UTC m=+961.724543759" lastFinishedPulling="2025-11-28 12:09:06.825592489 +0000 UTC m=+964.767335172" observedRunningTime="2025-11-28 12:09:08.863415932 +0000 UTC m=+966.805158655" watchObservedRunningTime="2025-11-28 12:09:08.867527703 +0000 UTC m=+966.809270386" Nov 28 12:09:09 crc kubenswrapper[5030]: I1128 12:09:09.641932 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:09:09 crc kubenswrapper[5030]: I1128 12:09:09.642004 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:09:09 crc kubenswrapper[5030]: I1128 12:09:09.696050 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:09:09 crc kubenswrapper[5030]: I1128 12:09:09.846528 5030 generic.go:334] "Generic (PLEG): container finished" podID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerID="5209d2474ea63be13c27c158a925c404d290ad6c43758bb671e2447071a544ab" exitCode=0 Nov 28 12:09:09 crc kubenswrapper[5030]: I1128 12:09:09.846608 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c6km" event={"ID":"c5a3e1f8-3b2f-44f5-b481-938543e524e4","Type":"ContainerDied","Data":"5209d2474ea63be13c27c158a925c404d290ad6c43758bb671e2447071a544ab"} Nov 28 12:09:12 crc kubenswrapper[5030]: I1128 12:09:12.874537 5030 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c6km" event={"ID":"c5a3e1f8-3b2f-44f5-b481-938543e524e4","Type":"ContainerStarted","Data":"dce468b4a050e0ea5997f2690e259e9e954580da463f3a80d2a5de3dd355b7d9"} Nov 28 12:09:12 crc kubenswrapper[5030]: I1128 12:09:12.878658 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" event={"ID":"b55b7cc9-5974-46a7-b685-252d63a2ada3","Type":"ContainerStarted","Data":"be861a1bd3cc7b947aa6e05e887e68f48d5202c6510face46d917429daf17254"} Nov 28 12:09:12 crc kubenswrapper[5030]: I1128 12:09:12.909586 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5c6km" podStartSLOduration=3.730714293 podStartE2EDuration="6.909569432s" podCreationTimestamp="2025-11-28 12:09:06 +0000 UTC" firstStartedPulling="2025-11-28 12:09:08.836559417 +0000 UTC m=+966.778302100" lastFinishedPulling="2025-11-28 12:09:12.015414556 +0000 UTC m=+969.957157239" observedRunningTime="2025-11-28 12:09:12.904135205 +0000 UTC m=+970.845877888" watchObservedRunningTime="2025-11-28 12:09:12.909569432 +0000 UTC m=+970.851312115" Nov 28 12:09:12 crc kubenswrapper[5030]: I1128 12:09:12.929356 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-s9jkg" podStartSLOduration=2.32678882 podStartE2EDuration="5.929335456s" podCreationTimestamp="2025-11-28 12:09:07 +0000 UTC" firstStartedPulling="2025-11-28 12:09:08.430091616 +0000 UTC m=+966.371834299" lastFinishedPulling="2025-11-28 12:09:12.032638212 +0000 UTC m=+969.974380935" observedRunningTime="2025-11-28 12:09:12.927511577 +0000 UTC m=+970.869254260" watchObservedRunningTime="2025-11-28 12:09:12.929335456 +0000 UTC m=+970.871078129" Nov 28 12:09:13 crc kubenswrapper[5030]: I1128 12:09:13.432531 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:09:13 crc kubenswrapper[5030]: I1128 12:09:13.647633 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.386130 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.391528 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.469330 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.874990 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.912747 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.921399 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-erlang-cookie" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.921783 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-plugins-conf" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.922082 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-server-conf" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.922791 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-default-user" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.932525 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-server-dockercfg-2w72w" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946088 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbmj\" (UniqueName: \"kubernetes.io/projected/a569f835-2a0b-4752-8d4c-8a0c22524cfa-kube-api-access-sbbmj\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946160 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a569f835-2a0b-4752-8d4c-8a0c22524cfa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946212 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/a569f835-2a0b-4752-8d4c-8a0c22524cfa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946256 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a569f835-2a0b-4752-8d4c-8a0c22524cfa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946296 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946350 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946381 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.946428 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:16 crc kubenswrapper[5030]: I1128 12:09:16.980122 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.048266 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a569f835-2a0b-4752-8d4c-8a0c22524cfa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.048826 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.049006 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.049137 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc 
kubenswrapper[5030]: I1128 12:09:17.049255 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.049408 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbmj\" (UniqueName: \"kubernetes.io/projected/a569f835-2a0b-4752-8d4c-8a0c22524cfa-kube-api-access-sbbmj\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.049620 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a569f835-2a0b-4752-8d4c-8a0c22524cfa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.049740 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a569f835-2a0b-4752-8d4c-8a0c22524cfa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.050798 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.051347 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.052639 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a569f835-2a0b-4752-8d4c-8a0c22524cfa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.057651 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a569f835-2a0b-4752-8d4c-8a0c22524cfa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.058157 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a569f835-2a0b-4752-8d4c-8a0c22524cfa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.060118 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a569f835-2a0b-4752-8d4c-8a0c22524cfa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.070402 5030 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.070678 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8dd49781a1ca350614f6fb0ff2d84cbf014cf494f13b04fd9086d32b2972ff6e/globalmount\"" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.074570 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbmj\" (UniqueName: \"kubernetes.io/projected/a569f835-2a0b-4752-8d4c-8a0c22524cfa-kube-api-access-sbbmj\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.142731 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d12f2e5-559b-4736-bff0-e51a12cc1d4e\") pod \"rabbitmq-server-0\" (UID: \"a569f835-2a0b-4752-8d4c-8a0c22524cfa\") " pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.277705 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.789948 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Nov 28 12:09:17 crc kubenswrapper[5030]: W1128 12:09:17.798726 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda569f835_2a0b_4752_8d4c_8a0c22524cfa.slice/crio-d0d4e7709ddc84ff77e09baab0b0275960fddb838ace90b7e04bf96d63d251bf WatchSource:0}: Error finding container d0d4e7709ddc84ff77e09baab0b0275960fddb838ace90b7e04bf96d63d251bf: Status 404 returned error can't find the container with id d0d4e7709ddc84ff77e09baab0b0275960fddb838ace90b7e04bf96d63d251bf Nov 28 12:09:17 crc kubenswrapper[5030]: I1128 12:09:17.953298 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"a569f835-2a0b-4752-8d4c-8a0c22524cfa","Type":"ContainerStarted","Data":"d0d4e7709ddc84ff77e09baab0b0275960fddb838ace90b7e04bf96d63d251bf"} Nov 28 12:09:18 crc kubenswrapper[5030]: I1128 12:09:18.048343 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:19 crc kubenswrapper[5030]: I1128 12:09:19.052575 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5c6km"] Nov 28 12:09:19 crc kubenswrapper[5030]: I1128 12:09:19.695040 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:09:19 crc kubenswrapper[5030]: I1128 12:09:19.969596 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5c6km" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="registry-server" containerID="cri-o://dce468b4a050e0ea5997f2690e259e9e954580da463f3a80d2a5de3dd355b7d9" 
gracePeriod=2 Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.463758 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-b8vrk"] Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.464737 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.466862 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-clvc2" Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.481324 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-b8vrk"] Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.510583 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrp6\" (UniqueName: \"kubernetes.io/projected/911d95fe-5fc4-4f07-aa44-f33c853625c6-kube-api-access-vfrp6\") pod \"keystone-operator-index-b8vrk\" (UID: \"911d95fe-5fc4-4f07-aa44-f33c853625c6\") " pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.611978 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfrp6\" (UniqueName: \"kubernetes.io/projected/911d95fe-5fc4-4f07-aa44-f33c853625c6-kube-api-access-vfrp6\") pod \"keystone-operator-index-b8vrk\" (UID: \"911d95fe-5fc4-4f07-aa44-f33c853625c6\") " pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.647519 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfrp6\" (UniqueName: \"kubernetes.io/projected/911d95fe-5fc4-4f07-aa44-f33c853625c6-kube-api-access-vfrp6\") pod \"keystone-operator-index-b8vrk\" (UID: \"911d95fe-5fc4-4f07-aa44-f33c853625c6\") " 
pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.784057 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.987123 5030 generic.go:334] "Generic (PLEG): container finished" podID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerID="dce468b4a050e0ea5997f2690e259e9e954580da463f3a80d2a5de3dd355b7d9" exitCode=0 Nov 28 12:09:20 crc kubenswrapper[5030]: I1128 12:09:20.987153 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c6km" event={"ID":"c5a3e1f8-3b2f-44f5-b481-938543e524e4","Type":"ContainerDied","Data":"dce468b4a050e0ea5997f2690e259e9e954580da463f3a80d2a5de3dd355b7d9"} Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.449406 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.524258 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-catalog-content\") pod \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.524317 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mkp\" (UniqueName: \"kubernetes.io/projected/c5a3e1f8-3b2f-44f5-b481-938543e524e4-kube-api-access-t5mkp\") pod \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.524489 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-utilities\") 
pod \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\" (UID: \"c5a3e1f8-3b2f-44f5-b481-938543e524e4\") " Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.527634 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-utilities" (OuterVolumeSpecName: "utilities") pod "c5a3e1f8-3b2f-44f5-b481-938543e524e4" (UID: "c5a3e1f8-3b2f-44f5-b481-938543e524e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.529601 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a3e1f8-3b2f-44f5-b481-938543e524e4-kube-api-access-t5mkp" (OuterVolumeSpecName: "kube-api-access-t5mkp") pod "c5a3e1f8-3b2f-44f5-b481-938543e524e4" (UID: "c5a3e1f8-3b2f-44f5-b481-938543e524e4"). InnerVolumeSpecName "kube-api-access-t5mkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.552431 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a3e1f8-3b2f-44f5-b481-938543e524e4" (UID: "c5a3e1f8-3b2f-44f5-b481-938543e524e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.626512 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.626563 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mkp\" (UniqueName: \"kubernetes.io/projected/c5a3e1f8-3b2f-44f5-b481-938543e524e4-kube-api-access-t5mkp\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.626577 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a3e1f8-3b2f-44f5-b481-938543e524e4-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.999373 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5c6km" event={"ID":"c5a3e1f8-3b2f-44f5-b481-938543e524e4","Type":"ContainerDied","Data":"04cdb725cbbb0d855e143733e026d5d8dd3826273f6b97f2fb13a1284948cc5d"} Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.999432 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5c6km" Nov 28 12:09:21 crc kubenswrapper[5030]: I1128 12:09:21.999480 5030 scope.go:117] "RemoveContainer" containerID="dce468b4a050e0ea5997f2690e259e9e954580da463f3a80d2a5de3dd355b7d9" Nov 28 12:09:22 crc kubenswrapper[5030]: I1128 12:09:22.045592 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5c6km"] Nov 28 12:09:22 crc kubenswrapper[5030]: I1128 12:09:22.054467 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5c6km"] Nov 28 12:09:22 crc kubenswrapper[5030]: I1128 12:09:22.410790 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" path="/var/lib/kubelet/pods/c5a3e1f8-3b2f-44f5-b481-938543e524e4/volumes" Nov 28 12:09:23 crc kubenswrapper[5030]: I1128 12:09:23.371153 5030 scope.go:117] "RemoveContainer" containerID="5209d2474ea63be13c27c158a925c404d290ad6c43758bb671e2447071a544ab" Nov 28 12:09:23 crc kubenswrapper[5030]: I1128 12:09:23.596898 5030 scope.go:117] "RemoveContainer" containerID="093f2bd0b26bfe9a21f340da51ccd52ebdb46d720e81fac8cb6116fa20709ba8" Nov 28 12:09:23 crc kubenswrapper[5030]: I1128 12:09:23.700414 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-b8vrk"] Nov 28 12:09:25 crc kubenswrapper[5030]: I1128 12:09:25.023438 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-b8vrk" event={"ID":"911d95fe-5fc4-4f07-aa44-f33c853625c6","Type":"ContainerStarted","Data":"712e34c3576f1474f09a5287dc0bb7426e3ab93729ea32b798e249dd9fad4a06"} Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.034671 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" 
event={"ID":"a569f835-2a0b-4752-8d4c-8a0c22524cfa","Type":"ContainerStarted","Data":"213faf5ae9a20b16d1b31a7d812e3170b38a51da891cb1149dd6e2ea7e4f36bc"} Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.037247 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-b8vrk" event={"ID":"911d95fe-5fc4-4f07-aa44-f33c853625c6","Type":"ContainerStarted","Data":"1a0bcd39abc2c1bf6460be50b4bbd6a6ccec5170cf04e0a95717b9ce37df9815"} Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.054965 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65mhk"] Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.055300 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65mhk" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="registry-server" containerID="cri-o://5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7" gracePeriod=2 Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.557828 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.584947 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-b8vrk" podStartSLOduration=5.58763872 podStartE2EDuration="6.584917986s" podCreationTimestamp="2025-11-28 12:09:20 +0000 UTC" firstStartedPulling="2025-11-28 12:09:24.521982906 +0000 UTC m=+982.463725589" lastFinishedPulling="2025-11-28 12:09:25.519262162 +0000 UTC m=+983.461004855" observedRunningTime="2025-11-28 12:09:26.095248706 +0000 UTC m=+984.036991389" watchObservedRunningTime="2025-11-28 12:09:26.584917986 +0000 UTC m=+984.526660709" Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.756875 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4wvd\" (UniqueName: \"kubernetes.io/projected/f9c576c5-1a1a-4dd0-9370-12c58da88047-kube-api-access-g4wvd\") pod \"f9c576c5-1a1a-4dd0-9370-12c58da88047\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.756960 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-catalog-content\") pod \"f9c576c5-1a1a-4dd0-9370-12c58da88047\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.757071 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-utilities\") pod \"f9c576c5-1a1a-4dd0-9370-12c58da88047\" (UID: \"f9c576c5-1a1a-4dd0-9370-12c58da88047\") " Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.758754 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-utilities" 
(OuterVolumeSpecName: "utilities") pod "f9c576c5-1a1a-4dd0-9370-12c58da88047" (UID: "f9c576c5-1a1a-4dd0-9370-12c58da88047"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.768736 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c576c5-1a1a-4dd0-9370-12c58da88047-kube-api-access-g4wvd" (OuterVolumeSpecName: "kube-api-access-g4wvd") pod "f9c576c5-1a1a-4dd0-9370-12c58da88047" (UID: "f9c576c5-1a1a-4dd0-9370-12c58da88047"). InnerVolumeSpecName "kube-api-access-g4wvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.823408 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9c576c5-1a1a-4dd0-9370-12c58da88047" (UID: "f9c576c5-1a1a-4dd0-9370-12c58da88047"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.858804 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4wvd\" (UniqueName: \"kubernetes.io/projected/f9c576c5-1a1a-4dd0-9370-12c58da88047-kube-api-access-g4wvd\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.858839 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:26 crc kubenswrapper[5030]: I1128 12:09:26.858852 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c576c5-1a1a-4dd0-9370-12c58da88047-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.057992 5030 generic.go:334] "Generic (PLEG): container finished" podID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerID="5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7" exitCode=0 Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.058793 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65mhk" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.058871 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65mhk" event={"ID":"f9c576c5-1a1a-4dd0-9370-12c58da88047","Type":"ContainerDied","Data":"5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7"} Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.058965 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65mhk" event={"ID":"f9c576c5-1a1a-4dd0-9370-12c58da88047","Type":"ContainerDied","Data":"8210781231bbf2dfd383f213cc2867a395d61c38396ab1160203052713aeb8fe"} Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.059012 5030 scope.go:117] "RemoveContainer" containerID="5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.092066 5030 scope.go:117] "RemoveContainer" containerID="4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.101980 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65mhk"] Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.111535 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65mhk"] Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.125386 5030 scope.go:117] "RemoveContainer" containerID="4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.154208 5030 scope.go:117] "RemoveContainer" containerID="5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7" Nov 28 12:09:27 crc kubenswrapper[5030]: E1128 12:09:27.154810 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7\": container with ID starting with 5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7 not found: ID does not exist" containerID="5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.154920 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7"} err="failed to get container status \"5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7\": rpc error: code = NotFound desc = could not find container \"5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7\": container with ID starting with 5cc971f78497ff506a614e6cb72606ff06bb7017bd8d3d4ecdf6f08052dd38a7 not found: ID does not exist" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.155008 5030 scope.go:117] "RemoveContainer" containerID="4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397" Nov 28 12:09:27 crc kubenswrapper[5030]: E1128 12:09:27.155614 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397\": container with ID starting with 4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397 not found: ID does not exist" containerID="4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.155696 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397"} err="failed to get container status \"4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397\": rpc error: code = NotFound desc = could not find container \"4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397\": container with ID 
starting with 4f78a1918de137dbe8143bef440a41357269dafd771b1c6388570c56bce1a397 not found: ID does not exist" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.155791 5030 scope.go:117] "RemoveContainer" containerID="4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0" Nov 28 12:09:27 crc kubenswrapper[5030]: E1128 12:09:27.156231 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0\": container with ID starting with 4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0 not found: ID does not exist" containerID="4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0" Nov 28 12:09:27 crc kubenswrapper[5030]: I1128 12:09:27.156298 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0"} err="failed to get container status \"4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0\": rpc error: code = NotFound desc = could not find container \"4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0\": container with ID starting with 4df2ea97fa2159e2e309f0ff05d0301a66a712228c8662f67d26e2fff24094d0 not found: ID does not exist" Nov 28 12:09:28 crc kubenswrapper[5030]: I1128 12:09:28.410242 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" path="/var/lib/kubelet/pods/f9c576c5-1a1a-4dd0-9370-12c58da88047/volumes" Nov 28 12:09:30 crc kubenswrapper[5030]: I1128 12:09:30.785331 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:30 crc kubenswrapper[5030]: I1128 12:09:30.786671 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:30 crc 
kubenswrapper[5030]: I1128 12:09:30.847731 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:31 crc kubenswrapper[5030]: I1128 12:09:31.150213 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-b8vrk" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.930732 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp"] Nov 28 12:09:35 crc kubenswrapper[5030]: E1128 12:09:35.931604 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="extract-content" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.931629 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="extract-content" Nov 28 12:09:35 crc kubenswrapper[5030]: E1128 12:09:35.931657 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="extract-utilities" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.931671 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="extract-utilities" Nov 28 12:09:35 crc kubenswrapper[5030]: E1128 12:09:35.931698 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="registry-server" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.931714 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="registry-server" Nov 28 12:09:35 crc kubenswrapper[5030]: E1128 12:09:35.931729 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="registry-server" Nov 28 12:09:35 crc 
kubenswrapper[5030]: I1128 12:09:35.931743 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="registry-server" Nov 28 12:09:35 crc kubenswrapper[5030]: E1128 12:09:35.931865 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="extract-utilities" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.931915 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="extract-utilities" Nov 28 12:09:35 crc kubenswrapper[5030]: E1128 12:09:35.931947 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="extract-content" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.931954 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="extract-content" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.932299 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c576c5-1a1a-4dd0-9370-12c58da88047" containerName="registry-server" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.932320 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a3e1f8-3b2f-44f5-b481-938543e524e4" containerName="registry-server" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.933678 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.936946 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-br5mr" Nov 28 12:09:35 crc kubenswrapper[5030]: I1128 12:09:35.974209 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp"] Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.034965 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-util\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.035188 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-bundle\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.035292 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz88l\" (UniqueName: \"kubernetes.io/projected/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-kube-api-access-jz88l\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 
12:09:36.138010 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-bundle\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.138733 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz88l\" (UniqueName: \"kubernetes.io/projected/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-kube-api-access-jz88l\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.138943 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-bundle\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.139100 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-util\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.139678 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-util\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.181270 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz88l\" (UniqueName: \"kubernetes.io/projected/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-kube-api-access-jz88l\") pod \"d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.271394 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:36 crc kubenswrapper[5030]: I1128 12:09:36.799424 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp"] Nov 28 12:09:37 crc kubenswrapper[5030]: I1128 12:09:37.157527 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" event={"ID":"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6","Type":"ContainerStarted","Data":"f240edca1ec9d78eea3693c2c642d0b35ae51f63b114d7f41c77841620f92009"} Nov 28 12:09:37 crc kubenswrapper[5030]: I1128 12:09:37.157967 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" event={"ID":"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6","Type":"ContainerStarted","Data":"391dffd5b6a19a8f0a578d7ad69dd94c674d59e852053b9010175082b104f609"} Nov 28 12:09:37 crc kubenswrapper[5030]: E1128 12:09:37.192729 5030 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4a3b7f5_6933_4be3_ae18_394be8bb4cf6.slice/crio-conmon-f240edca1ec9d78eea3693c2c642d0b35ae51f63b114d7f41c77841620f92009.scope\": RecentStats: unable to find data in memory cache]" Nov 28 12:09:38 crc kubenswrapper[5030]: I1128 12:09:38.168504 5030 generic.go:334] "Generic (PLEG): container finished" podID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerID="f240edca1ec9d78eea3693c2c642d0b35ae51f63b114d7f41c77841620f92009" exitCode=0 Nov 28 12:09:38 crc kubenswrapper[5030]: I1128 12:09:38.168641 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" event={"ID":"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6","Type":"ContainerDied","Data":"f240edca1ec9d78eea3693c2c642d0b35ae51f63b114d7f41c77841620f92009"} Nov 28 12:09:40 crc kubenswrapper[5030]: I1128 12:09:40.194491 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" event={"ID":"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6","Type":"ContainerStarted","Data":"6a15a37d5e3ee88f7faef15800b10fb3f05ceaaae94befd98fd0f46451cdbdf6"} Nov 28 12:09:41 crc kubenswrapper[5030]: I1128 12:09:41.204347 5030 generic.go:334] "Generic (PLEG): container finished" podID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerID="6a15a37d5e3ee88f7faef15800b10fb3f05ceaaae94befd98fd0f46451cdbdf6" exitCode=0 Nov 28 12:09:41 crc kubenswrapper[5030]: I1128 12:09:41.204453 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" event={"ID":"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6","Type":"ContainerDied","Data":"6a15a37d5e3ee88f7faef15800b10fb3f05ceaaae94befd98fd0f46451cdbdf6"} Nov 28 12:09:42 crc kubenswrapper[5030]: I1128 
12:09:42.216550 5030 generic.go:334] "Generic (PLEG): container finished" podID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerID="0f7e80bd94f9ab77c020ccd53e7903d332183d5f61be70673c168483e4a8dd66" exitCode=0 Nov 28 12:09:42 crc kubenswrapper[5030]: I1128 12:09:42.216702 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" event={"ID":"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6","Type":"ContainerDied","Data":"0f7e80bd94f9ab77c020ccd53e7903d332183d5f61be70673c168483e4a8dd66"} Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.527455 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.670770 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-util\") pod \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.671441 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz88l\" (UniqueName: \"kubernetes.io/projected/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-kube-api-access-jz88l\") pod \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.671646 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-bundle\") pod \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\" (UID: \"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6\") " Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.681614 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-kube-api-access-jz88l" (OuterVolumeSpecName: "kube-api-access-jz88l") pod "e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" (UID: "e4a3b7f5-6933-4be3-ae18-394be8bb4cf6"). InnerVolumeSpecName "kube-api-access-jz88l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.681840 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-bundle" (OuterVolumeSpecName: "bundle") pod "e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" (UID: "e4a3b7f5-6933-4be3-ae18-394be8bb4cf6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.684104 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-util" (OuterVolumeSpecName: "util") pod "e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" (UID: "e4a3b7f5-6933-4be3-ae18-394be8bb4cf6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.774041 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.774081 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:43 crc kubenswrapper[5030]: I1128 12:09:43.774092 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz88l\" (UniqueName: \"kubernetes.io/projected/e4a3b7f5-6933-4be3-ae18-394be8bb4cf6-kube-api-access-jz88l\") on node \"crc\" DevicePath \"\"" Nov 28 12:09:44 crc kubenswrapper[5030]: I1128 12:09:44.243945 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" event={"ID":"e4a3b7f5-6933-4be3-ae18-394be8bb4cf6","Type":"ContainerDied","Data":"391dffd5b6a19a8f0a578d7ad69dd94c674d59e852053b9010175082b104f609"} Nov 28 12:09:44 crc kubenswrapper[5030]: I1128 12:09:44.244009 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="391dffd5b6a19a8f0a578d7ad69dd94c674d59e852053b9010175082b104f609" Nov 28 12:09:44 crc kubenswrapper[5030]: I1128 12:09:44.244044 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp" Nov 28 12:09:59 crc kubenswrapper[5030]: I1128 12:09:59.378170 5030 generic.go:334] "Generic (PLEG): container finished" podID="a569f835-2a0b-4752-8d4c-8a0c22524cfa" containerID="213faf5ae9a20b16d1b31a7d812e3170b38a51da891cb1149dd6e2ea7e4f36bc" exitCode=0 Nov 28 12:09:59 crc kubenswrapper[5030]: I1128 12:09:59.378294 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"a569f835-2a0b-4752-8d4c-8a0c22524cfa","Type":"ContainerDied","Data":"213faf5ae9a20b16d1b31a7d812e3170b38a51da891cb1149dd6e2ea7e4f36bc"} Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.406109 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"a569f835-2a0b-4752-8d4c-8a0c22524cfa","Type":"ContainerStarted","Data":"bd892b4c1f96de5cc1f4f4be8f3c705ae076a98617b0a75a6db6e7e10ada6729"} Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.444393 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/rabbitmq-server-0" podStartSLOduration=38.649888742 podStartE2EDuration="45.444369219s" podCreationTimestamp="2025-11-28 12:09:15 +0000 UTC" firstStartedPulling="2025-11-28 12:09:17.803841382 +0000 UTC m=+975.745584105" lastFinishedPulling="2025-11-28 12:09:24.598321899 +0000 UTC m=+982.540064582" observedRunningTime="2025-11-28 12:10:00.440536605 +0000 UTC m=+1018.382279298" watchObservedRunningTime="2025-11-28 12:10:00.444369219 +0000 UTC m=+1018.386111912" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.572097 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb"] Nov 28 12:10:00 crc kubenswrapper[5030]: E1128 12:10:00.572453 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" 
containerName="extract" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.572485 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerName="extract" Nov 28 12:10:00 crc kubenswrapper[5030]: E1128 12:10:00.572504 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerName="util" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.572512 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerName="util" Nov 28 12:10:00 crc kubenswrapper[5030]: E1128 12:10:00.572524 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerName="pull" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.572530 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerName="pull" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.572658 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a3b7f5-6933-4be3-ae18-394be8bb4cf6" containerName="extract" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.573178 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.576790 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lwb47" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.577261 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.612033 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb"] Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.759595 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c79a1b48-ab32-4ab9-9226-54677c98d72c-apiservice-cert\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.759728 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c79a1b48-ab32-4ab9-9226-54677c98d72c-webhook-cert\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.759776 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2nr\" (UniqueName: \"kubernetes.io/projected/c79a1b48-ab32-4ab9-9226-54677c98d72c-kube-api-access-5w2nr\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: 
\"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.861877 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c79a1b48-ab32-4ab9-9226-54677c98d72c-apiservice-cert\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.861965 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c79a1b48-ab32-4ab9-9226-54677c98d72c-webhook-cert\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.861996 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w2nr\" (UniqueName: \"kubernetes.io/projected/c79a1b48-ab32-4ab9-9226-54677c98d72c-kube-api-access-5w2nr\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.867798 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c79a1b48-ab32-4ab9-9226-54677c98d72c-webhook-cert\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.867873 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c79a1b48-ab32-4ab9-9226-54677c98d72c-apiservice-cert\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.886491 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w2nr\" (UniqueName: \"kubernetes.io/projected/c79a1b48-ab32-4ab9-9226-54677c98d72c-kube-api-access-5w2nr\") pod \"keystone-operator-controller-manager-54f75d97f-lbqxb\" (UID: \"c79a1b48-ab32-4ab9-9226-54677c98d72c\") " pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:00 crc kubenswrapper[5030]: I1128 12:10:00.901733 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:01 crc kubenswrapper[5030]: I1128 12:10:01.342772 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb"] Nov 28 12:10:01 crc kubenswrapper[5030]: I1128 12:10:01.406813 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" event={"ID":"c79a1b48-ab32-4ab9-9226-54677c98d72c","Type":"ContainerStarted","Data":"9a117bad26c3fea8659ef413e58479c11b51745f3537cba9434e1e6bacfcda2f"} Nov 28 12:10:06 crc kubenswrapper[5030]: I1128 12:10:06.448703 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" event={"ID":"c79a1b48-ab32-4ab9-9226-54677c98d72c","Type":"ContainerStarted","Data":"d0053654eba78fbb5740751c25eac4209f09678ebec38be0d40ee4c3771554af"} Nov 28 12:10:06 crc kubenswrapper[5030]: I1128 12:10:06.449508 5030 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:06 crc kubenswrapper[5030]: I1128 12:10:06.469999 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" podStartSLOduration=2.545140612 podStartE2EDuration="6.469980158s" podCreationTimestamp="2025-11-28 12:10:00 +0000 UTC" firstStartedPulling="2025-11-28 12:10:01.350853559 +0000 UTC m=+1019.292596242" lastFinishedPulling="2025-11-28 12:10:05.275693105 +0000 UTC m=+1023.217435788" observedRunningTime="2025-11-28 12:10:06.467749867 +0000 UTC m=+1024.409492550" watchObservedRunningTime="2025-11-28 12:10:06.469980158 +0000 UTC m=+1024.411722841" Nov 28 12:10:07 crc kubenswrapper[5030]: I1128 12:10:07.278460 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:10:10 crc kubenswrapper[5030]: I1128 12:10:10.907947 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-54f75d97f-lbqxb" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.474684 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-create-9w92v"] Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.476438 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.487454 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9"] Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.489031 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.499757 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-db-secret" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.502812 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-9w92v"] Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.535622 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9"] Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.660131 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a579a17-723b-491c-8e33-ce15cb47f3f3-operator-scripts\") pod \"keystone-d06f-account-create-update-dwxq9\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.660201 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mnlz\" (UniqueName: \"kubernetes.io/projected/3a579a17-723b-491c-8e33-ce15cb47f3f3-kube-api-access-2mnlz\") pod \"keystone-d06f-account-create-update-dwxq9\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.660244 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7ssl\" (UniqueName: \"kubernetes.io/projected/c245622d-f12e-4906-b7e5-180b9dc50229-kube-api-access-r7ssl\") pod \"keystone-db-create-9w92v\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 
12:10:16.660285 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245622d-f12e-4906-b7e5-180b9dc50229-operator-scripts\") pod \"keystone-db-create-9w92v\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.761501 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a579a17-723b-491c-8e33-ce15cb47f3f3-operator-scripts\") pod \"keystone-d06f-account-create-update-dwxq9\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.761563 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mnlz\" (UniqueName: \"kubernetes.io/projected/3a579a17-723b-491c-8e33-ce15cb47f3f3-kube-api-access-2mnlz\") pod \"keystone-d06f-account-create-update-dwxq9\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.761616 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7ssl\" (UniqueName: \"kubernetes.io/projected/c245622d-f12e-4906-b7e5-180b9dc50229-kube-api-access-r7ssl\") pod \"keystone-db-create-9w92v\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.761668 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245622d-f12e-4906-b7e5-180b9dc50229-operator-scripts\") pod \"keystone-db-create-9w92v\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " 
pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.762680 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245622d-f12e-4906-b7e5-180b9dc50229-operator-scripts\") pod \"keystone-db-create-9w92v\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.762749 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a579a17-723b-491c-8e33-ce15cb47f3f3-operator-scripts\") pod \"keystone-d06f-account-create-update-dwxq9\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.781308 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7ssl\" (UniqueName: \"kubernetes.io/projected/c245622d-f12e-4906-b7e5-180b9dc50229-kube-api-access-r7ssl\") pod \"keystone-db-create-9w92v\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.793732 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mnlz\" (UniqueName: \"kubernetes.io/projected/3a579a17-723b-491c-8e33-ce15cb47f3f3-kube-api-access-2mnlz\") pod \"keystone-d06f-account-create-update-dwxq9\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.805431 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:16 crc kubenswrapper[5030]: I1128 12:10:16.828061 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.046256 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-9w92v"] Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.115987 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9"] Nov 28 12:10:17 crc kubenswrapper[5030]: W1128 12:10:17.118680 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a579a17_723b_491c_8e33_ce15cb47f3f3.slice/crio-b00cf32d2f51a3310abcbef3d6a32df0bc683d1e6ade7f60a8224c3feda4bc59 WatchSource:0}: Error finding container b00cf32d2f51a3310abcbef3d6a32df0bc683d1e6ade7f60a8224c3feda4bc59: Status 404 returned error can't find the container with id b00cf32d2f51a3310abcbef3d6a32df0bc683d1e6ade7f60a8224c3feda4bc59 Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.284197 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/rabbitmq-server-0" Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.555500 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-9w92v" event={"ID":"c245622d-f12e-4906-b7e5-180b9dc50229","Type":"ContainerStarted","Data":"daf3208436221dcb518e66afdbe3765adf19f2378b5bd682dbddcf960656e412"} Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.555559 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-9w92v" event={"ID":"c245622d-f12e-4906-b7e5-180b9dc50229","Type":"ContainerStarted","Data":"a7ee68d0ac239c76861369544f8bde825f1efe19465e279572bccef5effda85c"} Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.557954 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" 
event={"ID":"3a579a17-723b-491c-8e33-ce15cb47f3f3","Type":"ContainerStarted","Data":"e6dfc2ac5429186f0d3b257f50a065f702c63fe8e77c8d0396e4240cca32561a"} Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.557980 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" event={"ID":"3a579a17-723b-491c-8e33-ce15cb47f3f3","Type":"ContainerStarted","Data":"b00cf32d2f51a3310abcbef3d6a32df0bc683d1e6ade7f60a8224c3feda4bc59"} Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.576658 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-create-9w92v" podStartSLOduration=1.576640155 podStartE2EDuration="1.576640155s" podCreationTimestamp="2025-11-28 12:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:10:17.571583538 +0000 UTC m=+1035.513326221" watchObservedRunningTime="2025-11-28 12:10:17.576640155 +0000 UTC m=+1035.518382838" Nov 28 12:10:17 crc kubenswrapper[5030]: I1128 12:10:17.598186 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" podStartSLOduration=1.598166867 podStartE2EDuration="1.598166867s" podCreationTimestamp="2025-11-28 12:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:10:17.592692309 +0000 UTC m=+1035.534434992" watchObservedRunningTime="2025-11-28 12:10:17.598166867 +0000 UTC m=+1035.539909550" Nov 28 12:10:18 crc kubenswrapper[5030]: I1128 12:10:18.566611 5030 generic.go:334] "Generic (PLEG): container finished" podID="3a579a17-723b-491c-8e33-ce15cb47f3f3" containerID="e6dfc2ac5429186f0d3b257f50a065f702c63fe8e77c8d0396e4240cca32561a" exitCode=0 Nov 28 12:10:18 crc kubenswrapper[5030]: I1128 12:10:18.567157 5030 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" event={"ID":"3a579a17-723b-491c-8e33-ce15cb47f3f3","Type":"ContainerDied","Data":"e6dfc2ac5429186f0d3b257f50a065f702c63fe8e77c8d0396e4240cca32561a"} Nov 28 12:10:18 crc kubenswrapper[5030]: I1128 12:10:18.568734 5030 generic.go:334] "Generic (PLEG): container finished" podID="c245622d-f12e-4906-b7e5-180b9dc50229" containerID="daf3208436221dcb518e66afdbe3765adf19f2378b5bd682dbddcf960656e412" exitCode=0 Nov 28 12:10:18 crc kubenswrapper[5030]: I1128 12:10:18.568772 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-9w92v" event={"ID":"c245622d-f12e-4906-b7e5-180b9dc50229","Type":"ContainerDied","Data":"daf3208436221dcb518e66afdbe3765adf19f2378b5bd682dbddcf960656e412"} Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.266428 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-js7dn"] Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.268065 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.272031 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-lgczn" Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.294041 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-js7dn"] Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.420397 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj4ww\" (UniqueName: \"kubernetes.io/projected/e3d7cbc8-3d46-4db9-a4b7-5b19f326b476-kube-api-access-dj4ww\") pod \"horizon-operator-index-js7dn\" (UID: \"e3d7cbc8-3d46-4db9-a4b7-5b19f326b476\") " pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.521780 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj4ww\" (UniqueName: \"kubernetes.io/projected/e3d7cbc8-3d46-4db9-a4b7-5b19f326b476-kube-api-access-dj4ww\") pod \"horizon-operator-index-js7dn\" (UID: \"e3d7cbc8-3d46-4db9-a4b7-5b19f326b476\") " pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.557968 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj4ww\" (UniqueName: \"kubernetes.io/projected/e3d7cbc8-3d46-4db9-a4b7-5b19f326b476-kube-api-access-dj4ww\") pod \"horizon-operator-index-js7dn\" (UID: \"e3d7cbc8-3d46-4db9-a4b7-5b19f326b476\") " pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:19 crc kubenswrapper[5030]: I1128 12:10:19.591646 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.045860 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.051777 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.151930 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-js7dn"] Nov 28 12:10:20 crc kubenswrapper[5030]: W1128 12:10:20.158803 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3d7cbc8_3d46_4db9_a4b7_5b19f326b476.slice/crio-350c173991f84ce09fe04e18d2b75874af0733603f0e3fb6a12128111b97bc03 WatchSource:0}: Error finding container 350c173991f84ce09fe04e18d2b75874af0733603f0e3fb6a12128111b97bc03: Status 404 returned error can't find the container with id 350c173991f84ce09fe04e18d2b75874af0733603f0e3fb6a12128111b97bc03 Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.233546 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mnlz\" (UniqueName: \"kubernetes.io/projected/3a579a17-723b-491c-8e33-ce15cb47f3f3-kube-api-access-2mnlz\") pod \"3a579a17-723b-491c-8e33-ce15cb47f3f3\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.233668 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245622d-f12e-4906-b7e5-180b9dc50229-operator-scripts\") pod \"c245622d-f12e-4906-b7e5-180b9dc50229\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.233766 5030 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a579a17-723b-491c-8e33-ce15cb47f3f3-operator-scripts\") pod \"3a579a17-723b-491c-8e33-ce15cb47f3f3\" (UID: \"3a579a17-723b-491c-8e33-ce15cb47f3f3\") " Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.233926 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7ssl\" (UniqueName: \"kubernetes.io/projected/c245622d-f12e-4906-b7e5-180b9dc50229-kube-api-access-r7ssl\") pod \"c245622d-f12e-4906-b7e5-180b9dc50229\" (UID: \"c245622d-f12e-4906-b7e5-180b9dc50229\") " Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.234882 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c245622d-f12e-4906-b7e5-180b9dc50229-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c245622d-f12e-4906-b7e5-180b9dc50229" (UID: "c245622d-f12e-4906-b7e5-180b9dc50229"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.234884 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a579a17-723b-491c-8e33-ce15cb47f3f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a579a17-723b-491c-8e33-ce15cb47f3f3" (UID: "3a579a17-723b-491c-8e33-ce15cb47f3f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.239951 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a579a17-723b-491c-8e33-ce15cb47f3f3-kube-api-access-2mnlz" (OuterVolumeSpecName: "kube-api-access-2mnlz") pod "3a579a17-723b-491c-8e33-ce15cb47f3f3" (UID: "3a579a17-723b-491c-8e33-ce15cb47f3f3"). InnerVolumeSpecName "kube-api-access-2mnlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.242043 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c245622d-f12e-4906-b7e5-180b9dc50229-kube-api-access-r7ssl" (OuterVolumeSpecName: "kube-api-access-r7ssl") pod "c245622d-f12e-4906-b7e5-180b9dc50229" (UID: "c245622d-f12e-4906-b7e5-180b9dc50229"). InnerVolumeSpecName "kube-api-access-r7ssl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.335917 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7ssl\" (UniqueName: \"kubernetes.io/projected/c245622d-f12e-4906-b7e5-180b9dc50229-kube-api-access-r7ssl\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.335972 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mnlz\" (UniqueName: \"kubernetes.io/projected/3a579a17-723b-491c-8e33-ce15cb47f3f3-kube-api-access-2mnlz\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.335992 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c245622d-f12e-4906-b7e5-180b9dc50229-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.336011 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a579a17-723b-491c-8e33-ce15cb47f3f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.591648 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-9w92v" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.591679 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-9w92v" event={"ID":"c245622d-f12e-4906-b7e5-180b9dc50229","Type":"ContainerDied","Data":"a7ee68d0ac239c76861369544f8bde825f1efe19465e279572bccef5effda85c"} Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.591746 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ee68d0ac239c76861369544f8bde825f1efe19465e279572bccef5effda85c" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.594344 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-js7dn" event={"ID":"e3d7cbc8-3d46-4db9-a4b7-5b19f326b476","Type":"ContainerStarted","Data":"350c173991f84ce09fe04e18d2b75874af0733603f0e3fb6a12128111b97bc03"} Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.600883 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" event={"ID":"3a579a17-723b-491c-8e33-ce15cb47f3f3","Type":"ContainerDied","Data":"b00cf32d2f51a3310abcbef3d6a32df0bc683d1e6ade7f60a8224c3feda4bc59"} Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.600930 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b00cf32d2f51a3310abcbef3d6a32df0bc683d1e6ade7f60a8224c3feda4bc59" Nov 28 12:10:20 crc kubenswrapper[5030]: I1128 12:10:20.600993 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9" Nov 28 12:10:21 crc kubenswrapper[5030]: I1128 12:10:21.612119 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-js7dn" event={"ID":"e3d7cbc8-3d46-4db9-a4b7-5b19f326b476","Type":"ContainerStarted","Data":"3a701fc52faf3210a1e1009076fc7b0e2631315138d00f5b080fee7691a04fba"} Nov 28 12:10:21 crc kubenswrapper[5030]: I1128 12:10:21.642346 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-js7dn" podStartSLOduration=1.765681396 podStartE2EDuration="2.642298939s" podCreationTimestamp="2025-11-28 12:10:19 +0000 UTC" firstStartedPulling="2025-11-28 12:10:20.161719404 +0000 UTC m=+1038.103462097" lastFinishedPulling="2025-11-28 12:10:21.038336917 +0000 UTC m=+1038.980079640" observedRunningTime="2025-11-28 12:10:21.636711777 +0000 UTC m=+1039.578454550" watchObservedRunningTime="2025-11-28 12:10:21.642298939 +0000 UTC m=+1039.584041672" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.041995 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-sync-tn92m"] Nov 28 12:10:22 crc kubenswrapper[5030]: E1128 12:10:22.042322 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c245622d-f12e-4906-b7e5-180b9dc50229" containerName="mariadb-database-create" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.042340 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c245622d-f12e-4906-b7e5-180b9dc50229" containerName="mariadb-database-create" Nov 28 12:10:22 crc kubenswrapper[5030]: E1128 12:10:22.042367 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a579a17-723b-491c-8e33-ce15cb47f3f3" containerName="mariadb-account-create-update" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.042375 5030 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a579a17-723b-491c-8e33-ce15cb47f3f3" containerName="mariadb-account-create-update" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.042525 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a579a17-723b-491c-8e33-ce15cb47f3f3" containerName="mariadb-account-create-update" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.042539 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c245622d-f12e-4906-b7e5-180b9dc50229" containerName="mariadb-database-create" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.043019 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.048528 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.048608 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.048833 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-kv8d5" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.048858 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.058085 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-tn92m"] Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.171003 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-config-data\") pod \"keystone-db-sync-tn92m\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc 
kubenswrapper[5030]: I1128 12:10:22.171087 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnlbz\" (UniqueName: \"kubernetes.io/projected/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-kube-api-access-jnlbz\") pod \"keystone-db-sync-tn92m\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.272567 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnlbz\" (UniqueName: \"kubernetes.io/projected/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-kube-api-access-jnlbz\") pod \"keystone-db-sync-tn92m\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.272871 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-config-data\") pod \"keystone-db-sync-tn92m\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.289722 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-config-data\") pod \"keystone-db-sync-tn92m\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.308269 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnlbz\" (UniqueName: \"kubernetes.io/projected/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-kube-api-access-jnlbz\") pod \"keystone-db-sync-tn92m\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.364530 
5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.463730 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-tjqmp"] Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.468292 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.472010 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-kkcdk" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.479844 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tjqmp"] Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.577877 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69gl\" (UniqueName: \"kubernetes.io/projected/038980bd-4bb4-4185-a294-4e3bed113154-kube-api-access-j69gl\") pod \"swift-operator-index-tjqmp\" (UID: \"038980bd-4bb4-4185-a294-4e3bed113154\") " pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.667340 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-tn92m"] Nov 28 12:10:22 crc kubenswrapper[5030]: W1128 12:10:22.677072 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda14f92e4_8dc5_4fb4_8cf7_8ed25b79ebc5.slice/crio-7d92c6969d5f99242927104939bca37de3c6b7dbe8675af0c6f6d5ceaed93f03 WatchSource:0}: Error finding container 7d92c6969d5f99242927104939bca37de3c6b7dbe8675af0c6f6d5ceaed93f03: Status 404 returned error can't find the container with id 7d92c6969d5f99242927104939bca37de3c6b7dbe8675af0c6f6d5ceaed93f03 Nov 28 12:10:22 
crc kubenswrapper[5030]: I1128 12:10:22.679403 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69gl\" (UniqueName: \"kubernetes.io/projected/038980bd-4bb4-4185-a294-4e3bed113154-kube-api-access-j69gl\") pod \"swift-operator-index-tjqmp\" (UID: \"038980bd-4bb4-4185-a294-4e3bed113154\") " pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.704262 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69gl\" (UniqueName: \"kubernetes.io/projected/038980bd-4bb4-4185-a294-4e3bed113154-kube-api-access-j69gl\") pod \"swift-operator-index-tjqmp\" (UID: \"038980bd-4bb4-4185-a294-4e3bed113154\") " pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:22 crc kubenswrapper[5030]: I1128 12:10:22.788400 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:23 crc kubenswrapper[5030]: I1128 12:10:23.251602 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tjqmp"] Nov 28 12:10:23 crc kubenswrapper[5030]: I1128 12:10:23.644518 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tjqmp" event={"ID":"038980bd-4bb4-4185-a294-4e3bed113154","Type":"ContainerStarted","Data":"8434c1d540bb18768cc0de8b85a9220caac11d74d45cb0cabfdd463ecb9d77fb"} Nov 28 12:10:23 crc kubenswrapper[5030]: I1128 12:10:23.646938 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-tn92m" event={"ID":"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5","Type":"ContainerStarted","Data":"7d92c6969d5f99242927104939bca37de3c6b7dbe8675af0c6f6d5ceaed93f03"} Nov 28 12:10:24 crc kubenswrapper[5030]: I1128 12:10:24.660230 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tjqmp" 
event={"ID":"038980bd-4bb4-4185-a294-4e3bed113154","Type":"ContainerStarted","Data":"d8241da3134e23a832bfcbc4e80ab61502165644d93b0860d3d10290142a0a14"} Nov 28 12:10:27 crc kubenswrapper[5030]: I1128 12:10:27.650682 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-tjqmp" podStartSLOduration=4.658651187 podStartE2EDuration="5.650654201s" podCreationTimestamp="2025-11-28 12:10:22 +0000 UTC" firstStartedPulling="2025-11-28 12:10:23.267150013 +0000 UTC m=+1041.208892706" lastFinishedPulling="2025-11-28 12:10:24.259153037 +0000 UTC m=+1042.200895720" observedRunningTime="2025-11-28 12:10:24.680895891 +0000 UTC m=+1042.622638574" watchObservedRunningTime="2025-11-28 12:10:27.650654201 +0000 UTC m=+1045.592396884" Nov 28 12:10:27 crc kubenswrapper[5030]: I1128 12:10:27.653308 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-tjqmp"] Nov 28 12:10:27 crc kubenswrapper[5030]: I1128 12:10:27.653560 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-index-tjqmp" podUID="038980bd-4bb4-4185-a294-4e3bed113154" containerName="registry-server" containerID="cri-o://d8241da3134e23a832bfcbc4e80ab61502165644d93b0860d3d10290142a0a14" gracePeriod=2 Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.268930 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-njjsd"] Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.270256 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.279536 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-njjsd"] Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.376979 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7mh\" (UniqueName: \"kubernetes.io/projected/e2cb884d-70c4-4134-a38f-866f4650a9bb-kube-api-access-vf7mh\") pod \"swift-operator-index-njjsd\" (UID: \"e2cb884d-70c4-4134-a38f-866f4650a9bb\") " pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.479817 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7mh\" (UniqueName: \"kubernetes.io/projected/e2cb884d-70c4-4134-a38f-866f4650a9bb-kube-api-access-vf7mh\") pod \"swift-operator-index-njjsd\" (UID: \"e2cb884d-70c4-4134-a38f-866f4650a9bb\") " pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.501231 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7mh\" (UniqueName: \"kubernetes.io/projected/e2cb884d-70c4-4134-a38f-866f4650a9bb-kube-api-access-vf7mh\") pod \"swift-operator-index-njjsd\" (UID: \"e2cb884d-70c4-4134-a38f-866f4650a9bb\") " pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.594916 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.689112 5030 generic.go:334] "Generic (PLEG): container finished" podID="038980bd-4bb4-4185-a294-4e3bed113154" containerID="d8241da3134e23a832bfcbc4e80ab61502165644d93b0860d3d10290142a0a14" exitCode=0 Nov 28 12:10:28 crc kubenswrapper[5030]: I1128 12:10:28.689159 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tjqmp" event={"ID":"038980bd-4bb4-4185-a294-4e3bed113154","Type":"ContainerDied","Data":"d8241da3134e23a832bfcbc4e80ab61502165644d93b0860d3d10290142a0a14"} Nov 28 12:10:29 crc kubenswrapper[5030]: I1128 12:10:29.592739 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:29 crc kubenswrapper[5030]: I1128 12:10:29.592936 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:29 crc kubenswrapper[5030]: I1128 12:10:29.639611 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:29 crc kubenswrapper[5030]: I1128 12:10:29.742931 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-js7dn" Nov 28 12:10:32 crc kubenswrapper[5030]: I1128 12:10:32.789179 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.086519 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.191643 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j69gl\" (UniqueName: \"kubernetes.io/projected/038980bd-4bb4-4185-a294-4e3bed113154-kube-api-access-j69gl\") pod \"038980bd-4bb4-4185-a294-4e3bed113154\" (UID: \"038980bd-4bb4-4185-a294-4e3bed113154\") " Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.197253 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/038980bd-4bb4-4185-a294-4e3bed113154-kube-api-access-j69gl" (OuterVolumeSpecName: "kube-api-access-j69gl") pod "038980bd-4bb4-4185-a294-4e3bed113154" (UID: "038980bd-4bb4-4185-a294-4e3bed113154"). InnerVolumeSpecName "kube-api-access-j69gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.294385 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j69gl\" (UniqueName: \"kubernetes.io/projected/038980bd-4bb4-4185-a294-4e3bed113154-kube-api-access-j69gl\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.535117 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-njjsd"] Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.546990 5030 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.750390 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-tjqmp" Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.751001 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tjqmp" event={"ID":"038980bd-4bb4-4185-a294-4e3bed113154","Type":"ContainerDied","Data":"8434c1d540bb18768cc0de8b85a9220caac11d74d45cb0cabfdd463ecb9d77fb"} Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.751072 5030 scope.go:117] "RemoveContainer" containerID="d8241da3134e23a832bfcbc4e80ab61502165644d93b0860d3d10290142a0a14" Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.753619 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-tn92m" event={"ID":"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5","Type":"ContainerStarted","Data":"71687a7b6f937d3e01d783eb2448ed6ae33971b2af2304afc67a57484aaf3c4e"} Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.758804 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-njjsd" event={"ID":"e2cb884d-70c4-4134-a38f-866f4650a9bb","Type":"ContainerStarted","Data":"42d1d92ad281b3a2b6b770b38cef3bde2fa07aceadd5d1ff744322193a9b8913"} Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.782647 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-sync-tn92m" podStartSLOduration=1.338639076 podStartE2EDuration="12.782619736s" podCreationTimestamp="2025-11-28 12:10:22 +0000 UTC" firstStartedPulling="2025-11-28 12:10:22.681723344 +0000 UTC m=+1040.623466037" lastFinishedPulling="2025-11-28 12:10:34.125704014 +0000 UTC m=+1052.067446697" observedRunningTime="2025-11-28 12:10:34.772534924 +0000 UTC m=+1052.714277647" watchObservedRunningTime="2025-11-28 12:10:34.782619736 +0000 UTC m=+1052.724362449" Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.810163 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/swift-operator-index-tjqmp"] Nov 28 12:10:34 crc kubenswrapper[5030]: I1128 12:10:34.817249 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-index-tjqmp"] Nov 28 12:10:35 crc kubenswrapper[5030]: I1128 12:10:35.771774 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-njjsd" event={"ID":"e2cb884d-70c4-4134-a38f-866f4650a9bb","Type":"ContainerStarted","Data":"b8a21f5fa910b6af3b3a771f897e9a954bbf7c9e8c4bd7bed4de02756c23ef4d"} Nov 28 12:10:35 crc kubenswrapper[5030]: I1128 12:10:35.796321 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-njjsd" podStartSLOduration=7.245399049 podStartE2EDuration="7.796295375s" podCreationTimestamp="2025-11-28 12:10:28 +0000 UTC" firstStartedPulling="2025-11-28 12:10:34.546606014 +0000 UTC m=+1052.488348737" lastFinishedPulling="2025-11-28 12:10:35.09750234 +0000 UTC m=+1053.039245063" observedRunningTime="2025-11-28 12:10:35.794171037 +0000 UTC m=+1053.735913730" watchObservedRunningTime="2025-11-28 12:10:35.796295375 +0000 UTC m=+1053.738038068" Nov 28 12:10:36 crc kubenswrapper[5030]: I1128 12:10:36.406791 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="038980bd-4bb4-4185-a294-4e3bed113154" path="/var/lib/kubelet/pods/038980bd-4bb4-4185-a294-4e3bed113154/volumes" Nov 28 12:10:38 crc kubenswrapper[5030]: I1128 12:10:38.596601 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:38 crc kubenswrapper[5030]: I1128 12:10:38.597769 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:38 crc kubenswrapper[5030]: I1128 12:10:38.633134 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:39 crc kubenswrapper[5030]: I1128 12:10:39.811780 5030 generic.go:334] "Generic (PLEG): container finished" podID="a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5" containerID="71687a7b6f937d3e01d783eb2448ed6ae33971b2af2304afc67a57484aaf3c4e" exitCode=0 Nov 28 12:10:39 crc kubenswrapper[5030]: I1128 12:10:39.811844 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-tn92m" event={"ID":"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5","Type":"ContainerDied","Data":"71687a7b6f937d3e01d783eb2448ed6ae33971b2af2304afc67a57484aaf3c4e"} Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.238482 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.436996 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-config-data\") pod \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.437094 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnlbz\" (UniqueName: \"kubernetes.io/projected/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-kube-api-access-jnlbz\") pod \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\" (UID: \"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5\") " Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.448064 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-kube-api-access-jnlbz" (OuterVolumeSpecName: "kube-api-access-jnlbz") pod "a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5" (UID: "a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5"). InnerVolumeSpecName "kube-api-access-jnlbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.498769 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-config-data" (OuterVolumeSpecName: "config-data") pod "a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5" (UID: "a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.539007 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.539103 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnlbz\" (UniqueName: \"kubernetes.io/projected/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5-kube-api-access-jnlbz\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.833036 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-tn92m" event={"ID":"a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5","Type":"ContainerDied","Data":"7d92c6969d5f99242927104939bca37de3c6b7dbe8675af0c6f6d5ceaed93f03"} Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.833099 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d92c6969d5f99242927104939bca37de3c6b7dbe8675af0c6f6d5ceaed93f03" Nov 28 12:10:41 crc kubenswrapper[5030]: I1128 12:10:41.833098 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-tn92m" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.059055 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-bwvll"] Nov 28 12:10:42 crc kubenswrapper[5030]: E1128 12:10:42.059657 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="038980bd-4bb4-4185-a294-4e3bed113154" containerName="registry-server" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.059690 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="038980bd-4bb4-4185-a294-4e3bed113154" containerName="registry-server" Nov 28 12:10:42 crc kubenswrapper[5030]: E1128 12:10:42.059732 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5" containerName="keystone-db-sync" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.059747 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5" containerName="keystone-db-sync" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.060020 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="038980bd-4bb4-4185-a294-4e3bed113154" containerName="registry-server" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.060060 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5" containerName="keystone-db-sync" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.060962 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.065634 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.065842 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.066072 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"osp-secret" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.066530 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-kv8d5" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.066728 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.072437 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-bwvll"] Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.148920 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgrw\" (UniqueName: \"kubernetes.io/projected/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-kube-api-access-2qgrw\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.149009 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-credential-keys\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.149048 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-scripts\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.149120 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-config-data\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.149162 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-fernet-keys\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.250235 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-scripts\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.250361 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-config-data\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.250428 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-fernet-keys\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.250654 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgrw\" (UniqueName: \"kubernetes.io/projected/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-kube-api-access-2qgrw\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.250705 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-credential-keys\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.255529 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-fernet-keys\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.256159 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-config-data\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.256944 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-credential-keys\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.257847 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-scripts\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.272513 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgrw\" (UniqueName: \"kubernetes.io/projected/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-kube-api-access-2qgrw\") pod \"keystone-bootstrap-bwvll\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.385969 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.617535 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-bwvll"] Nov 28 12:10:42 crc kubenswrapper[5030]: W1128 12:10:42.624649 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod482f7ca1_8b55_4a4d_8a78_fe296e1801c0.slice/crio-e985c329fa05cf66bba7acd3e7e4e9a6e0c27f975419e28e08f021b69c7a5531 WatchSource:0}: Error finding container e985c329fa05cf66bba7acd3e7e4e9a6e0c27f975419e28e08f021b69c7a5531: Status 404 returned error can't find the container with id e985c329fa05cf66bba7acd3e7e4e9a6e0c27f975419e28e08f021b69c7a5531 Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.851717 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" event={"ID":"482f7ca1-8b55-4a4d-8a78-fe296e1801c0","Type":"ContainerStarted","Data":"6ecb90ac5a53babfe41221a56583aee9b7636c3da4eef4e5c69c02da4c2972a8"} Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.851831 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" event={"ID":"482f7ca1-8b55-4a4d-8a78-fe296e1801c0","Type":"ContainerStarted","Data":"e985c329fa05cf66bba7acd3e7e4e9a6e0c27f975419e28e08f021b69c7a5531"} Nov 28 12:10:42 crc kubenswrapper[5030]: I1128 12:10:42.875030 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" podStartSLOduration=0.874997158 podStartE2EDuration="874.997158ms" podCreationTimestamp="2025-11-28 12:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:10:42.873970321 +0000 UTC m=+1060.815713014" watchObservedRunningTime="2025-11-28 12:10:42.874997158 +0000 UTC m=+1060.816739871" 
Nov 28 12:10:45 crc kubenswrapper[5030]: I1128 12:10:45.900514 5030 generic.go:334] "Generic (PLEG): container finished" podID="482f7ca1-8b55-4a4d-8a78-fe296e1801c0" containerID="6ecb90ac5a53babfe41221a56583aee9b7636c3da4eef4e5c69c02da4c2972a8" exitCode=0 Nov 28 12:10:45 crc kubenswrapper[5030]: I1128 12:10:45.901028 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" event={"ID":"482f7ca1-8b55-4a4d-8a78-fe296e1801c0","Type":"ContainerDied","Data":"6ecb90ac5a53babfe41221a56583aee9b7636c3da4eef4e5c69c02da4c2972a8"} Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.321540 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.449299 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-scripts\") pod \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.449524 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-credential-keys\") pod \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.449629 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-config-data\") pod \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.449696 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qgrw\" (UniqueName: 
\"kubernetes.io/projected/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-kube-api-access-2qgrw\") pod \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.449760 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-fernet-keys\") pod \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\" (UID: \"482f7ca1-8b55-4a4d-8a78-fe296e1801c0\") " Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.457128 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-scripts" (OuterVolumeSpecName: "scripts") pod "482f7ca1-8b55-4a4d-8a78-fe296e1801c0" (UID: "482f7ca1-8b55-4a4d-8a78-fe296e1801c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.458716 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-kube-api-access-2qgrw" (OuterVolumeSpecName: "kube-api-access-2qgrw") pod "482f7ca1-8b55-4a4d-8a78-fe296e1801c0" (UID: "482f7ca1-8b55-4a4d-8a78-fe296e1801c0"). InnerVolumeSpecName "kube-api-access-2qgrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.459714 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "482f7ca1-8b55-4a4d-8a78-fe296e1801c0" (UID: "482f7ca1-8b55-4a4d-8a78-fe296e1801c0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.464310 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "482f7ca1-8b55-4a4d-8a78-fe296e1801c0" (UID: "482f7ca1-8b55-4a4d-8a78-fe296e1801c0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.483756 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-config-data" (OuterVolumeSpecName: "config-data") pod "482f7ca1-8b55-4a4d-8a78-fe296e1801c0" (UID: "482f7ca1-8b55-4a4d-8a78-fe296e1801c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.551988 5030 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.552023 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.552032 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qgrw\" (UniqueName: \"kubernetes.io/projected/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-kube-api-access-2qgrw\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.552043 5030 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 
12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.552090 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482f7ca1-8b55-4a4d-8a78-fe296e1801c0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.924573 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" event={"ID":"482f7ca1-8b55-4a4d-8a78-fe296e1801c0","Type":"ContainerDied","Data":"e985c329fa05cf66bba7acd3e7e4e9a6e0c27f975419e28e08f021b69c7a5531"} Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.924641 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e985c329fa05cf66bba7acd3e7e4e9a6e0c27f975419e28e08f021b69c7a5531" Nov 28 12:10:47 crc kubenswrapper[5030]: I1128 12:10:47.924682 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-bwvll" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.023327 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-5854d7bc86-t2mhb"] Nov 28 12:10:48 crc kubenswrapper[5030]: E1128 12:10:48.024058 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482f7ca1-8b55-4a4d-8a78-fe296e1801c0" containerName="keystone-bootstrap" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.024076 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="482f7ca1-8b55-4a4d-8a78-fe296e1801c0" containerName="keystone-bootstrap" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.024258 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="482f7ca1-8b55-4a4d-8a78-fe296e1801c0" containerName="keystone-bootstrap" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.024894 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.028067 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.028195 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.028772 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-kv8d5" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.028899 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.037405 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-5854d7bc86-t2mhb"] Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.061555 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-credential-keys\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.061630 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-scripts\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.061716 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-fernet-keys\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.061778 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-config-data\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.061903 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwkg\" (UniqueName: \"kubernetes.io/projected/e76b8eea-d098-4f8b-9048-991ad0e4c1da-kube-api-access-krwkg\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.164017 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-credential-keys\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.164107 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-scripts\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.164174 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-fernet-keys\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.164256 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-config-data\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.164300 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwkg\" (UniqueName: \"kubernetes.io/projected/e76b8eea-d098-4f8b-9048-991ad0e4c1da-kube-api-access-krwkg\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.170060 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-scripts\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.170143 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-credential-keys\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.170986 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-config-data\") pod 
\"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.185582 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e76b8eea-d098-4f8b-9048-991ad0e4c1da-fernet-keys\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.190433 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwkg\" (UniqueName: \"kubernetes.io/projected/e76b8eea-d098-4f8b-9048-991ad0e4c1da-kube-api-access-krwkg\") pod \"keystone-5854d7bc86-t2mhb\" (UID: \"e76b8eea-d098-4f8b-9048-991ad0e4c1da\") " pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.347004 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.618189 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-5854d7bc86-t2mhb"] Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.643408 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-njjsd" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.934806 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" event={"ID":"e76b8eea-d098-4f8b-9048-991ad0e4c1da","Type":"ContainerStarted","Data":"55e024b94a6c29377ee51c5745ba491337d73aaf749c3e1d831d74c78e22d499"} Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.935545 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.935575 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" event={"ID":"e76b8eea-d098-4f8b-9048-991ad0e4c1da","Type":"ContainerStarted","Data":"13a95ae21562a061fec861345cc5e2c2d6e6410211bda5256c96748222ccdb7f"} Nov 28 12:10:48 crc kubenswrapper[5030]: I1128 12:10:48.968166 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" podStartSLOduration=0.968140464 podStartE2EDuration="968.140464ms" podCreationTimestamp="2025-11-28 12:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:10:48.962284006 +0000 UTC m=+1066.904026729" watchObservedRunningTime="2025-11-28 12:10:48.968140464 +0000 UTC m=+1066.909883157" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.563422 5030 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r"] Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.567827 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.573792 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-br5mr" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.577514 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r"] Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.630723 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr48s\" (UniqueName: \"kubernetes.io/projected/2da742ad-42c7-4812-b7ee-04df6e644c0e-kube-api-access-dr48s\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.630877 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-bundle\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.631025 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-util\") pod 
\"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.733207 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-bundle\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.733335 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-util\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.733400 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr48s\" (UniqueName: \"kubernetes.io/projected/2da742ad-42c7-4812-b7ee-04df6e644c0e-kube-api-access-dr48s\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.734553 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-bundle\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " 
pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.734757 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-util\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.776029 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr48s\" (UniqueName: \"kubernetes.io/projected/2da742ad-42c7-4812-b7ee-04df6e644c0e-kube-api-access-dr48s\") pod \"9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:57 crc kubenswrapper[5030]: I1128 12:10:57.896553 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.233614 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r"] Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.511416 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb"] Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.512933 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.536266 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb"] Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.655658 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-bundle\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.656151 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzwq\" (UniqueName: \"kubernetes.io/projected/927808a1-7261-4ddb-961f-302a544cb77c-kube-api-access-xrzwq\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.656182 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-util\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.757774 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-bundle\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.757848 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-util\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.757870 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrzwq\" (UniqueName: \"kubernetes.io/projected/927808a1-7261-4ddb-961f-302a544cb77c-kube-api-access-xrzwq\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.758565 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-bundle\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.758736 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-util\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: 
\"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.796025 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrzwq\" (UniqueName: \"kubernetes.io/projected/927808a1-7261-4ddb-961f-302a544cb77c-kube-api-access-xrzwq\") pod \"87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:58 crc kubenswrapper[5030]: I1128 12:10:58.832444 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:10:59 crc kubenswrapper[5030]: I1128 12:10:59.031560 5030 generic.go:334] "Generic (PLEG): container finished" podID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerID="209b7b682f92f90032b7b5b84e382ee61d366c35e6ba157f9cef43828e3d8401" exitCode=0 Nov 28 12:10:59 crc kubenswrapper[5030]: I1128 12:10:59.031693 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" event={"ID":"2da742ad-42c7-4812-b7ee-04df6e644c0e","Type":"ContainerDied","Data":"209b7b682f92f90032b7b5b84e382ee61d366c35e6ba157f9cef43828e3d8401"} Nov 28 12:10:59 crc kubenswrapper[5030]: I1128 12:10:59.032074 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" event={"ID":"2da742ad-42c7-4812-b7ee-04df6e644c0e","Type":"ContainerStarted","Data":"8c936877325dce49d26453b7628e52ac71c9ebb99463dfc2fd986b19373cb264"} Nov 28 12:10:59 crc kubenswrapper[5030]: I1128 12:10:59.142010 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb"] Nov 28 12:10:59 crc kubenswrapper[5030]: W1128 12:10:59.151130 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod927808a1_7261_4ddb_961f_302a544cb77c.slice/crio-e09e18813ee6c3834956a179f1d16990bcc584d5e73b4cb8fdbfaa8c543ef00e WatchSource:0}: Error finding container e09e18813ee6c3834956a179f1d16990bcc584d5e73b4cb8fdbfaa8c543ef00e: Status 404 returned error can't find the container with id e09e18813ee6c3834956a179f1d16990bcc584d5e73b4cb8fdbfaa8c543ef00e Nov 28 12:11:00 crc kubenswrapper[5030]: I1128 12:11:00.054410 5030 generic.go:334] "Generic (PLEG): container finished" podID="927808a1-7261-4ddb-961f-302a544cb77c" containerID="7869bc7424dcfa253c398fbea552450b8ab2cea6ed8b442269b5f0f3b407b94e" exitCode=0 Nov 28 12:11:00 crc kubenswrapper[5030]: I1128 12:11:00.054596 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" event={"ID":"927808a1-7261-4ddb-961f-302a544cb77c","Type":"ContainerDied","Data":"7869bc7424dcfa253c398fbea552450b8ab2cea6ed8b442269b5f0f3b407b94e"} Nov 28 12:11:00 crc kubenswrapper[5030]: I1128 12:11:00.055071 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" event={"ID":"927808a1-7261-4ddb-961f-302a544cb77c","Type":"ContainerStarted","Data":"e09e18813ee6c3834956a179f1d16990bcc584d5e73b4cb8fdbfaa8c543ef00e"} Nov 28 12:11:01 crc kubenswrapper[5030]: I1128 12:11:01.069490 5030 generic.go:334] "Generic (PLEG): container finished" podID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerID="1fb7f74be16663bef152aa24d6e60ad5fdee233a7c88fca7e2c6f2f577ce2f0d" exitCode=0 Nov 28 12:11:01 crc kubenswrapper[5030]: I1128 12:11:01.069568 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" event={"ID":"2da742ad-42c7-4812-b7ee-04df6e644c0e","Type":"ContainerDied","Data":"1fb7f74be16663bef152aa24d6e60ad5fdee233a7c88fca7e2c6f2f577ce2f0d"} Nov 28 12:11:02 crc kubenswrapper[5030]: I1128 12:11:02.085021 5030 generic.go:334] "Generic (PLEG): container finished" podID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerID="959a260027ff0d21c0fee8539bef9f9b5dc4343d710cf69a1738726a193978d4" exitCode=0 Nov 28 12:11:02 crc kubenswrapper[5030]: I1128 12:11:02.085109 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" event={"ID":"2da742ad-42c7-4812-b7ee-04df6e644c0e","Type":"ContainerDied","Data":"959a260027ff0d21c0fee8539bef9f9b5dc4343d710cf69a1738726a193978d4"} Nov 28 12:11:02 crc kubenswrapper[5030]: I1128 12:11:02.088848 5030 generic.go:334] "Generic (PLEG): container finished" podID="927808a1-7261-4ddb-961f-302a544cb77c" containerID="01833beef2049cb91a2cc053df44ba8e5c51a01c230ca5355e3533355b35cfc1" exitCode=0 Nov 28 12:11:02 crc kubenswrapper[5030]: I1128 12:11:02.088921 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" event={"ID":"927808a1-7261-4ddb-961f-302a544cb77c","Type":"ContainerDied","Data":"01833beef2049cb91a2cc053df44ba8e5c51a01c230ca5355e3533355b35cfc1"} Nov 28 12:11:03 crc kubenswrapper[5030]: I1128 12:11:03.202068 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:11:03 crc kubenswrapper[5030]: I1128 12:11:03.202613 5030 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.483974 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.678739 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr48s\" (UniqueName: \"kubernetes.io/projected/2da742ad-42c7-4812-b7ee-04df6e644c0e-kube-api-access-dr48s\") pod \"2da742ad-42c7-4812-b7ee-04df6e644c0e\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.679617 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-util\") pod \"2da742ad-42c7-4812-b7ee-04df6e644c0e\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.679745 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-bundle\") pod \"2da742ad-42c7-4812-b7ee-04df6e644c0e\" (UID: \"2da742ad-42c7-4812-b7ee-04df6e644c0e\") " Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.681578 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-bundle" (OuterVolumeSpecName: "bundle") pod "2da742ad-42c7-4812-b7ee-04df6e644c0e" (UID: "2da742ad-42c7-4812-b7ee-04df6e644c0e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.692900 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da742ad-42c7-4812-b7ee-04df6e644c0e-kube-api-access-dr48s" (OuterVolumeSpecName: "kube-api-access-dr48s") pod "2da742ad-42c7-4812-b7ee-04df6e644c0e" (UID: "2da742ad-42c7-4812-b7ee-04df6e644c0e"). InnerVolumeSpecName "kube-api-access-dr48s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.693738 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-util" (OuterVolumeSpecName: "util") pod "2da742ad-42c7-4812-b7ee-04df6e644c0e" (UID: "2da742ad-42c7-4812-b7ee-04df6e644c0e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.782391 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr48s\" (UniqueName: \"kubernetes.io/projected/2da742ad-42c7-4812-b7ee-04df6e644c0e-kube-api-access-dr48s\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.782434 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:04 crc kubenswrapper[5030]: I1128 12:11:04.782445 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2da742ad-42c7-4812-b7ee-04df6e644c0e-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:05 crc kubenswrapper[5030]: I1128 12:11:05.133035 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" 
event={"ID":"2da742ad-42c7-4812-b7ee-04df6e644c0e","Type":"ContainerDied","Data":"8c936877325dce49d26453b7628e52ac71c9ebb99463dfc2fd986b19373cb264"} Nov 28 12:11:05 crc kubenswrapper[5030]: I1128 12:11:05.133361 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c936877325dce49d26453b7628e52ac71c9ebb99463dfc2fd986b19373cb264" Nov 28 12:11:05 crc kubenswrapper[5030]: I1128 12:11:05.133058 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r" Nov 28 12:11:05 crc kubenswrapper[5030]: I1128 12:11:05.137159 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" event={"ID":"927808a1-7261-4ddb-961f-302a544cb77c","Type":"ContainerStarted","Data":"8e4531112d19198847eea43df58e092cd16cf1ba0bc8a7a4869020448665b3cd"} Nov 28 12:11:06 crc kubenswrapper[5030]: I1128 12:11:06.150982 5030 generic.go:334] "Generic (PLEG): container finished" podID="927808a1-7261-4ddb-961f-302a544cb77c" containerID="8e4531112d19198847eea43df58e092cd16cf1ba0bc8a7a4869020448665b3cd" exitCode=0 Nov 28 12:11:06 crc kubenswrapper[5030]: I1128 12:11:06.151075 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" event={"ID":"927808a1-7261-4ddb-961f-302a544cb77c","Type":"ContainerDied","Data":"8e4531112d19198847eea43df58e092cd16cf1ba0bc8a7a4869020448665b3cd"} Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.556956 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.736405 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-bundle\") pod \"927808a1-7261-4ddb-961f-302a544cb77c\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.736621 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-util\") pod \"927808a1-7261-4ddb-961f-302a544cb77c\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.736756 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrzwq\" (UniqueName: \"kubernetes.io/projected/927808a1-7261-4ddb-961f-302a544cb77c-kube-api-access-xrzwq\") pod \"927808a1-7261-4ddb-961f-302a544cb77c\" (UID: \"927808a1-7261-4ddb-961f-302a544cb77c\") " Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.738256 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-bundle" (OuterVolumeSpecName: "bundle") pod "927808a1-7261-4ddb-961f-302a544cb77c" (UID: "927808a1-7261-4ddb-961f-302a544cb77c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.748512 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-util" (OuterVolumeSpecName: "util") pod "927808a1-7261-4ddb-961f-302a544cb77c" (UID: "927808a1-7261-4ddb-961f-302a544cb77c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.750551 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927808a1-7261-4ddb-961f-302a544cb77c-kube-api-access-xrzwq" (OuterVolumeSpecName: "kube-api-access-xrzwq") pod "927808a1-7261-4ddb-961f-302a544cb77c" (UID: "927808a1-7261-4ddb-961f-302a544cb77c"). InnerVolumeSpecName "kube-api-access-xrzwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.839165 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.839210 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrzwq\" (UniqueName: \"kubernetes.io/projected/927808a1-7261-4ddb-961f-302a544cb77c-kube-api-access-xrzwq\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:07 crc kubenswrapper[5030]: I1128 12:11:07.839224 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/927808a1-7261-4ddb-961f-302a544cb77c-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:08 crc kubenswrapper[5030]: I1128 12:11:08.171555 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" event={"ID":"927808a1-7261-4ddb-961f-302a544cb77c","Type":"ContainerDied","Data":"e09e18813ee6c3834956a179f1d16990bcc584d5e73b4cb8fdbfaa8c543ef00e"} Nov 28 12:11:08 crc kubenswrapper[5030]: I1128 12:11:08.171617 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb" Nov 28 12:11:08 crc kubenswrapper[5030]: I1128 12:11:08.171651 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09e18813ee6c3834956a179f1d16990bcc584d5e73b4cb8fdbfaa8c543ef00e" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.565364 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4"] Nov 28 12:11:16 crc kubenswrapper[5030]: E1128 12:11:16.566330 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerName="extract" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566347 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerName="extract" Nov 28 12:11:16 crc kubenswrapper[5030]: E1128 12:11:16.566367 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerName="util" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566375 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerName="util" Nov 28 12:11:16 crc kubenswrapper[5030]: E1128 12:11:16.566389 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927808a1-7261-4ddb-961f-302a544cb77c" containerName="util" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566396 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="927808a1-7261-4ddb-961f-302a544cb77c" containerName="util" Nov 28 12:11:16 crc kubenswrapper[5030]: E1128 12:11:16.566413 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerName="pull" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566421 5030 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerName="pull" Nov 28 12:11:16 crc kubenswrapper[5030]: E1128 12:11:16.566432 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927808a1-7261-4ddb-961f-302a544cb77c" containerName="extract" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566438 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="927808a1-7261-4ddb-961f-302a544cb77c" containerName="extract" Nov 28 12:11:16 crc kubenswrapper[5030]: E1128 12:11:16.566449 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927808a1-7261-4ddb-961f-302a544cb77c" containerName="pull" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566455 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="927808a1-7261-4ddb-961f-302a544cb77c" containerName="pull" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566599 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="927808a1-7261-4ddb-961f-302a544cb77c" containerName="extract" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.566619 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da742ad-42c7-4812-b7ee-04df6e644c0e" containerName="extract" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.567154 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.569829 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.569869 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mmlwt" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.581939 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4"] Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.589312 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15d81285-0f77-422a-9189-d17114debbfc-webhook-cert\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.589434 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfbx\" (UniqueName: \"kubernetes.io/projected/15d81285-0f77-422a-9189-d17114debbfc-kube-api-access-qjfbx\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.589555 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15d81285-0f77-422a-9189-d17114debbfc-apiservice-cert\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: 
\"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.690399 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjfbx\" (UniqueName: \"kubernetes.io/projected/15d81285-0f77-422a-9189-d17114debbfc-kube-api-access-qjfbx\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.690490 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15d81285-0f77-422a-9189-d17114debbfc-apiservice-cert\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.690551 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15d81285-0f77-422a-9189-d17114debbfc-webhook-cert\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.698763 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15d81285-0f77-422a-9189-d17114debbfc-apiservice-cert\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.699489 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15d81285-0f77-422a-9189-d17114debbfc-webhook-cert\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.725539 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjfbx\" (UniqueName: \"kubernetes.io/projected/15d81285-0f77-422a-9189-d17114debbfc-kube-api-access-qjfbx\") pod \"horizon-operator-controller-manager-86dcdc6f89-snck4\" (UID: \"15d81285-0f77-422a-9189-d17114debbfc\") " pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:16 crc kubenswrapper[5030]: I1128 12:11:16.888375 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:17 crc kubenswrapper[5030]: I1128 12:11:17.412724 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4"] Nov 28 12:11:18 crc kubenswrapper[5030]: I1128 12:11:18.258703 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" event={"ID":"15d81285-0f77-422a-9189-d17114debbfc","Type":"ContainerStarted","Data":"b51e8d84f08732f2eb0978c8f083ca0730a3fe4a931ef6c7ad4c8d09f1440a0e"} Nov 28 12:11:19 crc kubenswrapper[5030]: I1128 12:11:19.983264 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/keystone-5854d7bc86-t2mhb" Nov 28 12:11:21 crc kubenswrapper[5030]: I1128 12:11:21.292400 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" 
event={"ID":"15d81285-0f77-422a-9189-d17114debbfc","Type":"ContainerStarted","Data":"54a20f9818a9308b58a06c02f03a0bff24e8226b465cd85f955fb56cb6696047"} Nov 28 12:11:21 crc kubenswrapper[5030]: I1128 12:11:21.293652 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:21 crc kubenswrapper[5030]: I1128 12:11:21.316807 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" podStartSLOduration=2.139953784 podStartE2EDuration="5.316783812s" podCreationTimestamp="2025-11-28 12:11:16 +0000 UTC" firstStartedPulling="2025-11-28 12:11:17.394625311 +0000 UTC m=+1095.336368004" lastFinishedPulling="2025-11-28 12:11:20.571455339 +0000 UTC m=+1098.513198032" observedRunningTime="2025-11-28 12:11:21.307274945 +0000 UTC m=+1099.249017638" watchObservedRunningTime="2025-11-28 12:11:21.316783812 +0000 UTC m=+1099.258526495" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.358275 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z"] Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.359175 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.361244 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.361433 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-gmj25" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.369450 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z"] Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.503559 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29nt5\" (UniqueName: \"kubernetes.io/projected/8840337f-4675-46f8-b78b-5097e685fe53-kube-api-access-29nt5\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.503693 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8840337f-4675-46f8-b78b-5097e685fe53-apiservice-cert\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.503808 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8840337f-4675-46f8-b78b-5097e685fe53-webhook-cert\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: 
\"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.604946 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29nt5\" (UniqueName: \"kubernetes.io/projected/8840337f-4675-46f8-b78b-5097e685fe53-kube-api-access-29nt5\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.605009 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8840337f-4675-46f8-b78b-5097e685fe53-apiservice-cert\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.605066 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8840337f-4675-46f8-b78b-5097e685fe53-webhook-cert\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.619481 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8840337f-4675-46f8-b78b-5097e685fe53-webhook-cert\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.619481 5030 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8840337f-4675-46f8-b78b-5097e685fe53-apiservice-cert\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.623525 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29nt5\" (UniqueName: \"kubernetes.io/projected/8840337f-4675-46f8-b78b-5097e685fe53-kube-api-access-29nt5\") pod \"swift-operator-controller-manager-7d968d985d-jqp2z\" (UID: \"8840337f-4675-46f8-b78b-5097e685fe53\") " pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:23 crc kubenswrapper[5030]: I1128 12:11:23.675386 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:24 crc kubenswrapper[5030]: I1128 12:11:24.131015 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z"] Nov 28 12:11:24 crc kubenswrapper[5030]: I1128 12:11:24.318108 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" event={"ID":"8840337f-4675-46f8-b78b-5097e685fe53","Type":"ContainerStarted","Data":"1c24a2e04ed5ee3da3825bccf0e58db83ef7b02ee067c4fcb468d31bc253ac72"} Nov 28 12:11:26 crc kubenswrapper[5030]: I1128 12:11:26.893595 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-86dcdc6f89-snck4" Nov 28 12:11:27 crc kubenswrapper[5030]: I1128 12:11:27.340186 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" 
event={"ID":"8840337f-4675-46f8-b78b-5097e685fe53","Type":"ContainerStarted","Data":"26eb7a14716390024e34831ef802fdac63f505ee010776220b6d24e164ae8a4d"} Nov 28 12:11:27 crc kubenswrapper[5030]: I1128 12:11:27.340370 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 12:11:27 crc kubenswrapper[5030]: I1128 12:11:27.365203 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" podStartSLOduration=2.1436395089999998 podStartE2EDuration="4.365183738s" podCreationTimestamp="2025-11-28 12:11:23 +0000 UTC" firstStartedPulling="2025-11-28 12:11:24.13480532 +0000 UTC m=+1102.076547993" lastFinishedPulling="2025-11-28 12:11:26.356349539 +0000 UTC m=+1104.298092222" observedRunningTime="2025-11-28 12:11:27.363157712 +0000 UTC m=+1105.304900405" watchObservedRunningTime="2025-11-28 12:11:27.365183738 +0000 UTC m=+1105.306926421" Nov 28 12:11:33 crc kubenswrapper[5030]: I1128 12:11:33.201745 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:11:33 crc kubenswrapper[5030]: I1128 12:11:33.202568 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:11:33 crc kubenswrapper[5030]: I1128 12:11:33.680514 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7d968d985d-jqp2z" Nov 28 
12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.639448 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.644407 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.648046 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-storage-config-data" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.648303 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-files" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.648464 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-swift-dockercfg-btjhr" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.665362 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-conf" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.671928 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.731308 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-vdwnx"] Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.732121 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.733791 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-proxy-config-data" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.733863 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-scripts" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.734925 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-config-data" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.745308 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-vdwnx"] Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.813416 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.813534 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpl5r\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-kube-api-access-zpl5r\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.813582 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3818002-2687-4201-8ceb-f0272289cab9-cache\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.813605 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3818002-2687-4201-8ceb-f0272289cab9-lock\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.813636 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916129 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwb28\" (UniqueName: \"kubernetes.io/projected/b04753c6-7d4f-472c-89b9-9ef512737377-kube-api-access-dwb28\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916227 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916264 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-scripts\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916303 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-swiftconf\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916331 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-ring-data-devices\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916373 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-dispersionconf\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916407 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpl5r\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-kube-api-access-zpl5r\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916565 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b04753c6-7d4f-472c-89b9-9ef512737377-etc-swift\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916589 5030 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916743 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3818002-2687-4201-8ceb-f0272289cab9-cache\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916810 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3818002-2687-4201-8ceb-f0272289cab9-lock\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.916871 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: E1128 12:11:39.917073 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:39 crc kubenswrapper[5030]: E1128 12:11:39.917103 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:11:39 crc kubenswrapper[5030]: E1128 12:11:39.917164 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift 
podName:c3818002-2687-4201-8ceb-f0272289cab9 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:40.417141926 +0000 UTC m=+1118.358884609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift") pod "swift-storage-0" (UID: "c3818002-2687-4201-8ceb-f0272289cab9") : configmap "swift-ring-files" not found Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.917190 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c3818002-2687-4201-8ceb-f0272289cab9-cache\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.917313 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c3818002-2687-4201-8ceb-f0272289cab9-lock\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.940671 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:39 crc kubenswrapper[5030]: I1128 12:11:39.947269 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpl5r\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-kube-api-access-zpl5r\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.018298 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dwb28\" (UniqueName: \"kubernetes.io/projected/b04753c6-7d4f-472c-89b9-9ef512737377-kube-api-access-dwb28\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.018686 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-scripts\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.018730 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-swiftconf\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.018752 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-ring-data-devices\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.018784 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-dispersionconf\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.018819 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/b04753c6-7d4f-472c-89b9-9ef512737377-etc-swift\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.019512 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b04753c6-7d4f-472c-89b9-9ef512737377-etc-swift\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.019693 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-scripts\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.020965 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-ring-data-devices\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.023032 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-swiftconf\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.023766 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-dispersionconf\") pod 
\"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.042067 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwb28\" (UniqueName: \"kubernetes.io/projected/b04753c6-7d4f-472c-89b9-9ef512737377-kube-api-access-dwb28\") pod \"swift-ring-rebalance-vdwnx\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.045106 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.423611 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:40 crc kubenswrapper[5030]: E1128 12:11:40.423890 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:40 crc kubenswrapper[5030]: E1128 12:11:40.423932 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:11:40 crc kubenswrapper[5030]: E1128 12:11:40.424005 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift podName:c3818002-2687-4201-8ceb-f0272289cab9 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:41.423981031 +0000 UTC m=+1119.365723714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift") pod "swift-storage-0" (UID: "c3818002-2687-4201-8ceb-f0272289cab9") : configmap "swift-ring-files" not found Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.533302 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-vdwnx"] Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.655708 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-llnqd"] Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.656559 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.658980 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-index-dockercfg-895bd" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.669316 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-llnqd"] Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.837638 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txdq\" (UniqueName: \"kubernetes.io/projected/8bfb5317-9e89-460a-b5ae-5d553d2c9eba-kube-api-access-8txdq\") pod \"glance-operator-index-llnqd\" (UID: \"8bfb5317-9e89-460a-b5ae-5d553d2c9eba\") " pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.939415 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txdq\" (UniqueName: \"kubernetes.io/projected/8bfb5317-9e89-460a-b5ae-5d553d2c9eba-kube-api-access-8txdq\") pod \"glance-operator-index-llnqd\" (UID: \"8bfb5317-9e89-460a-b5ae-5d553d2c9eba\") " pod="openstack-operators/glance-operator-index-llnqd" Nov 28 
12:11:40 crc kubenswrapper[5030]: I1128 12:11:40.966634 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txdq\" (UniqueName: \"kubernetes.io/projected/8bfb5317-9e89-460a-b5ae-5d553d2c9eba-kube-api-access-8txdq\") pod \"glance-operator-index-llnqd\" (UID: \"8bfb5317-9e89-460a-b5ae-5d553d2c9eba\") " pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.026410 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.446144 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:41 crc kubenswrapper[5030]: E1128 12:11:41.446352 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:41 crc kubenswrapper[5030]: E1128 12:11:41.446709 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:11:41 crc kubenswrapper[5030]: E1128 12:11:41.446809 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift podName:c3818002-2687-4201-8ceb-f0272289cab9 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:43.446781556 +0000 UTC m=+1121.388524239 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift") pod "swift-storage-0" (UID: "c3818002-2687-4201-8ceb-f0272289cab9") : configmap "swift-ring-files" not found Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.456623 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" event={"ID":"b04753c6-7d4f-472c-89b9-9ef512737377","Type":"ContainerStarted","Data":"b5a431207c836d196fef015a45745dedc4f33fd69b768e88d13830f527723e6a"} Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.493548 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-llnqd"] Nov 28 12:11:41 crc kubenswrapper[5030]: W1128 12:11:41.505185 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bfb5317_9e89_460a_b5ae_5d553d2c9eba.slice/crio-529ff223123b7263c0cd3ac8b4db8f8026d54b48f7df84eeb55e81fd58d83ee5 WatchSource:0}: Error finding container 529ff223123b7263c0cd3ac8b4db8f8026d54b48f7df84eeb55e81fd58d83ee5: Status 404 returned error can't find the container with id 529ff223123b7263c0cd3ac8b4db8f8026d54b48f7df84eeb55e81fd58d83ee5 Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.566903 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck"] Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.568357 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.578526 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck"] Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.757332 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3caf6ea0-05f6-4415-8486-f0472d654719-log-httpd\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.757394 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3caf6ea0-05f6-4415-8486-f0472d654719-run-httpd\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.757503 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.757863 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljtgp\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-kube-api-access-ljtgp\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.757974 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3caf6ea0-05f6-4415-8486-f0472d654719-config-data\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.859211 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.859275 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljtgp\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-kube-api-access-ljtgp\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.859304 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3caf6ea0-05f6-4415-8486-f0472d654719-config-data\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.859347 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3caf6ea0-05f6-4415-8486-f0472d654719-log-httpd\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.859375 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3caf6ea0-05f6-4415-8486-f0472d654719-run-httpd\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.859943 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3caf6ea0-05f6-4415-8486-f0472d654719-run-httpd\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: E1128 12:11:41.860042 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:41 crc kubenswrapper[5030]: E1128 12:11:41.860062 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck: configmap "swift-ring-files" not found Nov 28 12:11:41 crc kubenswrapper[5030]: E1128 12:11:41.860103 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift podName:3caf6ea0-05f6-4415-8486-f0472d654719 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:42.360088571 +0000 UTC m=+1120.301831254 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift") pod "swift-proxy-6bd58cfcf7-cslck" (UID: "3caf6ea0-05f6-4415-8486-f0472d654719") : configmap "swift-ring-files" not found Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.860630 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3caf6ea0-05f6-4415-8486-f0472d654719-log-httpd\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.866267 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3caf6ea0-05f6-4415-8486-f0472d654719-config-data\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:41 crc kubenswrapper[5030]: I1128 12:11:41.875082 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljtgp\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-kube-api-access-ljtgp\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:42 crc kubenswrapper[5030]: I1128 12:11:42.367829 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:42 crc kubenswrapper[5030]: E1128 12:11:42.368027 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not 
found Nov 28 12:11:42 crc kubenswrapper[5030]: E1128 12:11:42.368271 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck: configmap "swift-ring-files" not found Nov 28 12:11:42 crc kubenswrapper[5030]: E1128 12:11:42.368331 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift podName:3caf6ea0-05f6-4415-8486-f0472d654719 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:43.368311094 +0000 UTC m=+1121.310053777 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift") pod "swift-proxy-6bd58cfcf7-cslck" (UID: "3caf6ea0-05f6-4415-8486-f0472d654719") : configmap "swift-ring-files" not found Nov 28 12:11:42 crc kubenswrapper[5030]: I1128 12:11:42.466658 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-llnqd" event={"ID":"8bfb5317-9e89-460a-b5ae-5d553d2c9eba","Type":"ContainerStarted","Data":"529ff223123b7263c0cd3ac8b4db8f8026d54b48f7df84eeb55e81fd58d83ee5"} Nov 28 12:11:43 crc kubenswrapper[5030]: I1128 12:11:43.382623 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:43 crc kubenswrapper[5030]: E1128 12:11:43.382874 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:43 crc kubenswrapper[5030]: E1128 12:11:43.382916 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck: configmap "swift-ring-files" 
not found Nov 28 12:11:43 crc kubenswrapper[5030]: E1128 12:11:43.382997 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift podName:3caf6ea0-05f6-4415-8486-f0472d654719 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:45.38296613 +0000 UTC m=+1123.324708813 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift") pod "swift-proxy-6bd58cfcf7-cslck" (UID: "3caf6ea0-05f6-4415-8486-f0472d654719") : configmap "swift-ring-files" not found Nov 28 12:11:43 crc kubenswrapper[5030]: I1128 12:11:43.484760 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:43 crc kubenswrapper[5030]: E1128 12:11:43.484965 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:43 crc kubenswrapper[5030]: E1128 12:11:43.484979 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:11:43 crc kubenswrapper[5030]: E1128 12:11:43.485025 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift podName:c3818002-2687-4201-8ceb-f0272289cab9 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:47.485010299 +0000 UTC m=+1125.426752982 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift") pod "swift-storage-0" (UID: "c3818002-2687-4201-8ceb-f0272289cab9") : configmap "swift-ring-files" not found Nov 28 12:11:45 crc kubenswrapper[5030]: I1128 12:11:45.421875 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:45 crc kubenswrapper[5030]: E1128 12:11:45.422125 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:45 crc kubenswrapper[5030]: E1128 12:11:45.422401 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck: configmap "swift-ring-files" not found Nov 28 12:11:45 crc kubenswrapper[5030]: E1128 12:11:45.422483 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift podName:3caf6ea0-05f6-4415-8486-f0472d654719 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:49.422449146 +0000 UTC m=+1127.364191829 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift") pod "swift-proxy-6bd58cfcf7-cslck" (UID: "3caf6ea0-05f6-4415-8486-f0472d654719") : configmap "swift-ring-files" not found Nov 28 12:11:47 crc kubenswrapper[5030]: I1128 12:11:47.576798 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:47 crc kubenswrapper[5030]: E1128 12:11:47.577025 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:47 crc kubenswrapper[5030]: E1128 12:11:47.577061 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:11:47 crc kubenswrapper[5030]: E1128 12:11:47.577120 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift podName:c3818002-2687-4201-8ceb-f0272289cab9 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:55.577101446 +0000 UTC m=+1133.518844129 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift") pod "swift-storage-0" (UID: "c3818002-2687-4201-8ceb-f0272289cab9") : configmap "swift-ring-files" not found Nov 28 12:11:49 crc kubenswrapper[5030]: I1128 12:11:49.509788 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:49 crc kubenswrapper[5030]: E1128 12:11:49.510095 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:49 crc kubenswrapper[5030]: E1128 12:11:49.510220 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck: configmap "swift-ring-files" not found Nov 28 12:11:49 crc kubenswrapper[5030]: E1128 12:11:49.510284 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift podName:3caf6ea0-05f6-4415-8486-f0472d654719 nodeName:}" failed. No retries permitted until 2025-11-28 12:11:57.510264078 +0000 UTC m=+1135.452006771 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift") pod "swift-proxy-6bd58cfcf7-cslck" (UID: "3caf6ea0-05f6-4415-8486-f0472d654719") : configmap "swift-ring-files" not found Nov 28 12:11:50 crc kubenswrapper[5030]: I1128 12:11:50.532970 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" event={"ID":"b04753c6-7d4f-472c-89b9-9ef512737377","Type":"ContainerStarted","Data":"c1a0571d1f602791aff354e05a24983afd0e3c9f89eda1fdec9f488791cbf83b"} Nov 28 12:11:50 crc kubenswrapper[5030]: I1128 12:11:50.536801 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-llnqd" event={"ID":"8bfb5317-9e89-460a-b5ae-5d553d2c9eba","Type":"ContainerStarted","Data":"a9b966cd83fecaaf248933dccba2cfd18975f03368fb6cf1863192422da1bcd3"} Nov 28 12:11:50 crc kubenswrapper[5030]: I1128 12:11:50.562236 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" podStartSLOduration=2.881251824 podStartE2EDuration="11.562205562s" podCreationTimestamp="2025-11-28 12:11:39 +0000 UTC" firstStartedPulling="2025-11-28 12:11:40.539845653 +0000 UTC m=+1118.481588336" lastFinishedPulling="2025-11-28 12:11:49.220799391 +0000 UTC m=+1127.162542074" observedRunningTime="2025-11-28 12:11:50.553368663 +0000 UTC m=+1128.495111366" watchObservedRunningTime="2025-11-28 12:11:50.562205562 +0000 UTC m=+1128.503948255" Nov 28 12:11:50 crc kubenswrapper[5030]: I1128 12:11:50.604802 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-llnqd" podStartSLOduration=2.063306876 podStartE2EDuration="10.604520637s" podCreationTimestamp="2025-11-28 12:11:40 +0000 UTC" firstStartedPulling="2025-11-28 12:11:41.507566319 +0000 UTC m=+1119.449308992" lastFinishedPulling="2025-11-28 12:11:50.04878006 +0000 UTC 
m=+1127.990522753" observedRunningTime="2025-11-28 12:11:50.580373853 +0000 UTC m=+1128.522116566" watchObservedRunningTime="2025-11-28 12:11:50.604520637 +0000 UTC m=+1128.546263360" Nov 28 12:11:51 crc kubenswrapper[5030]: I1128 12:11:51.027404 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:11:51 crc kubenswrapper[5030]: I1128 12:11:51.027501 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:11:51 crc kubenswrapper[5030]: I1128 12:11:51.072666 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:11:55 crc kubenswrapper[5030]: I1128 12:11:55.642428 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:11:55 crc kubenswrapper[5030]: E1128 12:11:55.643247 5030 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Nov 28 12:11:55 crc kubenswrapper[5030]: E1128 12:11:55.645003 5030 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Nov 28 12:11:55 crc kubenswrapper[5030]: E1128 12:11:55.645151 5030 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift podName:c3818002-2687-4201-8ceb-f0272289cab9 nodeName:}" failed. No retries permitted until 2025-11-28 12:12:11.645126822 +0000 UTC m=+1149.586869515 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift") pod "swift-storage-0" (UID: "c3818002-2687-4201-8ceb-f0272289cab9") : configmap "swift-ring-files" not found Nov 28 12:11:57 crc kubenswrapper[5030]: I1128 12:11:57.582408 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:57 crc kubenswrapper[5030]: I1128 12:11:57.593421 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3caf6ea0-05f6-4415-8486-f0472d654719-etc-swift\") pod \"swift-proxy-6bd58cfcf7-cslck\" (UID: \"3caf6ea0-05f6-4415-8486-f0472d654719\") " pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:57 crc kubenswrapper[5030]: I1128 12:11:57.598276 5030 generic.go:334] "Generic (PLEG): container finished" podID="b04753c6-7d4f-472c-89b9-9ef512737377" containerID="c1a0571d1f602791aff354e05a24983afd0e3c9f89eda1fdec9f488791cbf83b" exitCode=0 Nov 28 12:11:57 crc kubenswrapper[5030]: I1128 12:11:57.598343 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" event={"ID":"b04753c6-7d4f-472c-89b9-9ef512737377","Type":"ContainerDied","Data":"c1a0571d1f602791aff354e05a24983afd0e3c9f89eda1fdec9f488791cbf83b"} Nov 28 12:11:57 crc kubenswrapper[5030]: I1128 12:11:57.801285 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:58 crc kubenswrapper[5030]: I1128 12:11:58.318040 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck"] Nov 28 12:11:58 crc kubenswrapper[5030]: I1128 12:11:58.616406 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" event={"ID":"3caf6ea0-05f6-4415-8486-f0472d654719","Type":"ContainerStarted","Data":"76d1254611222702d62a56f09530ab5842346cb3862a5b0135c2a74fb668932d"} Nov 28 12:11:58 crc kubenswrapper[5030]: I1128 12:11:58.619294 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" event={"ID":"3caf6ea0-05f6-4415-8486-f0472d654719","Type":"ContainerStarted","Data":"1ab3fcd8a560a2ba54dcb07ced27dc38cf386e871d0e7232f20e28b27dd027b2"} Nov 28 12:11:58 crc kubenswrapper[5030]: I1128 12:11:58.993912 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.140809 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-swiftconf\") pod \"b04753c6-7d4f-472c-89b9-9ef512737377\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.140892 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b04753c6-7d4f-472c-89b9-9ef512737377-etc-swift\") pod \"b04753c6-7d4f-472c-89b9-9ef512737377\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.140993 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-scripts\") pod \"b04753c6-7d4f-472c-89b9-9ef512737377\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.141172 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwb28\" (UniqueName: \"kubernetes.io/projected/b04753c6-7d4f-472c-89b9-9ef512737377-kube-api-access-dwb28\") pod \"b04753c6-7d4f-472c-89b9-9ef512737377\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.141228 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-ring-data-devices\") pod \"b04753c6-7d4f-472c-89b9-9ef512737377\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.141312 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" 
(UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-dispersionconf\") pod \"b04753c6-7d4f-472c-89b9-9ef512737377\" (UID: \"b04753c6-7d4f-472c-89b9-9ef512737377\") " Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.142672 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b04753c6-7d4f-472c-89b9-9ef512737377" (UID: "b04753c6-7d4f-472c-89b9-9ef512737377"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.142798 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04753c6-7d4f-472c-89b9-9ef512737377-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b04753c6-7d4f-472c-89b9-9ef512737377" (UID: "b04753c6-7d4f-472c-89b9-9ef512737377"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.150249 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04753c6-7d4f-472c-89b9-9ef512737377-kube-api-access-dwb28" (OuterVolumeSpecName: "kube-api-access-dwb28") pod "b04753c6-7d4f-472c-89b9-9ef512737377" (UID: "b04753c6-7d4f-472c-89b9-9ef512737377"). InnerVolumeSpecName "kube-api-access-dwb28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.179640 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b04753c6-7d4f-472c-89b9-9ef512737377" (UID: "b04753c6-7d4f-472c-89b9-9ef512737377"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.180527 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b04753c6-7d4f-472c-89b9-9ef512737377" (UID: "b04753c6-7d4f-472c-89b9-9ef512737377"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.181097 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-scripts" (OuterVolumeSpecName: "scripts") pod "b04753c6-7d4f-472c-89b9-9ef512737377" (UID: "b04753c6-7d4f-472c-89b9-9ef512737377"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.243954 5030 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.243998 5030 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b04753c6-7d4f-472c-89b9-9ef512737377-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.244009 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.244021 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwb28\" (UniqueName: \"kubernetes.io/projected/b04753c6-7d4f-472c-89b9-9ef512737377-kube-api-access-dwb28\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 
12:11:59.244036 5030 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b04753c6-7d4f-472c-89b9-9ef512737377-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.244049 5030 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b04753c6-7d4f-472c-89b9-9ef512737377-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.645862 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" event={"ID":"b04753c6-7d4f-472c-89b9-9ef512737377","Type":"ContainerDied","Data":"b5a431207c836d196fef015a45745dedc4f33fd69b768e88d13830f527723e6a"} Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.645952 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a431207c836d196fef015a45745dedc4f33fd69b768e88d13830f527723e6a" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.646091 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-vdwnx" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.682204 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" event={"ID":"3caf6ea0-05f6-4415-8486-f0472d654719","Type":"ContainerStarted","Data":"0a1ce079ea71d7e0d0d42e51ba397995b41a1c793f57995c48c739bedadaeada"} Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.686902 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.686983 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:11:59 crc kubenswrapper[5030]: I1128 12:11:59.719543 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" podStartSLOduration=18.719514532 podStartE2EDuration="18.719514532s" podCreationTimestamp="2025-11-28 12:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:11:59.706430127 +0000 UTC m=+1137.648172820" watchObservedRunningTime="2025-11-28 12:11:59.719514532 +0000 UTC m=+1137.661257225" Nov 28 12:11:59 crc kubenswrapper[5030]: E1128 12:11:59.831858 5030 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb04753c6_7d4f_472c_89b9_9ef512737377.slice\": RecentStats: unable to find data in memory cache]" Nov 28 12:12:01 crc kubenswrapper[5030]: I1128 12:12:01.064879 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-index-llnqd" Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.202673 5030 patch_prober.go:28] interesting 
pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.203303 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.203382 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.204516 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b5a0df1bdf326961f0bfd95e325cb1bcebbae770d53c82e197938a5584c8725"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.204639 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://2b5a0df1bdf326961f0bfd95e325cb1bcebbae770d53c82e197938a5584c8725" gracePeriod=600 Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.781340 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="2b5a0df1bdf326961f0bfd95e325cb1bcebbae770d53c82e197938a5584c8725" exitCode=0 Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.781628 
5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"2b5a0df1bdf326961f0bfd95e325cb1bcebbae770d53c82e197938a5584c8725"} Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.782017 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"a7058c9055a9b9f831de3e82c6637d0fddb246f761f212b4d9db9f0e85aa948a"} Nov 28 12:12:03 crc kubenswrapper[5030]: I1128 12:12:03.782070 5030 scope.go:117] "RemoveContainer" containerID="440c69d6f2693ab24ec11da83e2b2b49568d8223dcdef3effa26def3f51975e3" Nov 28 12:12:07 crc kubenswrapper[5030]: I1128 12:12:07.804736 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:12:07 crc kubenswrapper[5030]: I1128 12:12:07.807637 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6bd58cfcf7-cslck" Nov 28 12:12:10 crc kubenswrapper[5030]: I1128 12:12:10.923248 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz"] Nov 28 12:12:10 crc kubenswrapper[5030]: E1128 12:12:10.924300 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04753c6-7d4f-472c-89b9-9ef512737377" containerName="swift-ring-rebalance" Nov 28 12:12:10 crc kubenswrapper[5030]: I1128 12:12:10.924314 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04753c6-7d4f-472c-89b9-9ef512737377" containerName="swift-ring-rebalance" Nov 28 12:12:10 crc kubenswrapper[5030]: I1128 12:12:10.924486 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04753c6-7d4f-472c-89b9-9ef512737377" containerName="swift-ring-rebalance" Nov 28 
12:12:10 crc kubenswrapper[5030]: I1128 12:12:10.925538 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:10 crc kubenswrapper[5030]: I1128 12:12:10.931022 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-br5mr" Nov 28 12:12:10 crc kubenswrapper[5030]: I1128 12:12:10.957158 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz"] Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.110034 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-bundle\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.110788 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-util\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.110882 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lxz\" (UniqueName: \"kubernetes.io/projected/22820358-bfdc-4f0f-94fd-a31b149e42ff-kube-api-access-c6lxz\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " 
pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.212945 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-bundle\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.213035 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-util\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.213067 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lxz\" (UniqueName: \"kubernetes.io/projected/22820358-bfdc-4f0f-94fd-a31b149e42ff-kube-api-access-c6lxz\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.214569 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-util\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.214551 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-bundle\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.250820 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lxz\" (UniqueName: \"kubernetes.io/projected/22820358-bfdc-4f0f-94fd-a31b149e42ff-kube-api-access-c6lxz\") pod \"cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.262171 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.600733 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz"] Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.720934 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.735175 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c3818002-2687-4201-8ceb-f0272289cab9-etc-swift\") pod \"swift-storage-0\" (UID: \"c3818002-2687-4201-8ceb-f0272289cab9\") " 
pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.765655 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Nov 28 12:12:11 crc kubenswrapper[5030]: I1128 12:12:11.868789 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" event={"ID":"22820358-bfdc-4f0f-94fd-a31b149e42ff","Type":"ContainerStarted","Data":"302eebb7c58a3b28f02f231697368e0a038a799a1d3a0ec0e4e7404d33e4d27a"} Nov 28 12:12:12 crc kubenswrapper[5030]: I1128 12:12:12.309135 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Nov 28 12:12:12 crc kubenswrapper[5030]: W1128 12:12:12.354699 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3818002_2687_4201_8ceb_f0272289cab9.slice/crio-6f9f83c3c975dcdecca4f81e12a5949a4b612b98e3a8428ca12f26c4a8a1fdc1 WatchSource:0}: Error finding container 6f9f83c3c975dcdecca4f81e12a5949a4b612b98e3a8428ca12f26c4a8a1fdc1: Status 404 returned error can't find the container with id 6f9f83c3c975dcdecca4f81e12a5949a4b612b98e3a8428ca12f26c4a8a1fdc1 Nov 28 12:12:12 crc kubenswrapper[5030]: I1128 12:12:12.876933 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"6f9f83c3c975dcdecca4f81e12a5949a4b612b98e3a8428ca12f26c4a8a1fdc1"} Nov 28 12:12:12 crc kubenswrapper[5030]: I1128 12:12:12.878663 5030 generic.go:334] "Generic (PLEG): container finished" podID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerID="1147f98592f9cbd8c9685f359b6eda95827e928a470c2a08cee75d50955f1e37" exitCode=0 Nov 28 12:12:12 crc kubenswrapper[5030]: I1128 12:12:12.878702 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" event={"ID":"22820358-bfdc-4f0f-94fd-a31b149e42ff","Type":"ContainerDied","Data":"1147f98592f9cbd8c9685f359b6eda95827e928a470c2a08cee75d50955f1e37"} Nov 28 12:12:14 crc kubenswrapper[5030]: I1128 12:12:14.903746 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"3fa1c0fe8fc034b17491a4fa73a3a22e0ec6edce31607211a28c75681085fd58"} Nov 28 12:12:14 crc kubenswrapper[5030]: I1128 12:12:14.904976 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"58193609be3ba0f5ec315419df51a4d983361926cd3d8b2b5c87326dc980c147"} Nov 28 12:12:14 crc kubenswrapper[5030]: I1128 12:12:14.904994 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"44672d53a9756ab399d69910553e7d1fe327fd9791774608454c3ed6bccb5c63"} Nov 28 12:12:14 crc kubenswrapper[5030]: I1128 12:12:14.907428 5030 generic.go:334] "Generic (PLEG): container finished" podID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerID="422596e63c7fd5477c6275db680b6f42d3593b796c960e4023886b5947f7f7f8" exitCode=0 Nov 28 12:12:14 crc kubenswrapper[5030]: I1128 12:12:14.907509 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" event={"ID":"22820358-bfdc-4f0f-94fd-a31b149e42ff","Type":"ContainerDied","Data":"422596e63c7fd5477c6275db680b6f42d3593b796c960e4023886b5947f7f7f8"} Nov 28 12:12:16 crc kubenswrapper[5030]: I1128 12:12:16.924524 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" 
event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"e96ee92e8919985266daac500d27e87ebe2480b5a6791fca545e541d138eb5c1"} Nov 28 12:12:16 crc kubenswrapper[5030]: I1128 12:12:16.926541 5030 generic.go:334] "Generic (PLEG): container finished" podID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerID="be559b2246bbd0e2c6316a652d53353b5ebc1c01f709fa8beffb4f0b4dc0c995" exitCode=0 Nov 28 12:12:16 crc kubenswrapper[5030]: I1128 12:12:16.926583 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" event={"ID":"22820358-bfdc-4f0f-94fd-a31b149e42ff","Type":"ContainerDied","Data":"be559b2246bbd0e2c6316a652d53353b5ebc1c01f709fa8beffb4f0b4dc0c995"} Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.410661 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.549204 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-util\") pod \"22820358-bfdc-4f0f-94fd-a31b149e42ff\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.549649 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6lxz\" (UniqueName: \"kubernetes.io/projected/22820358-bfdc-4f0f-94fd-a31b149e42ff-kube-api-access-c6lxz\") pod \"22820358-bfdc-4f0f-94fd-a31b149e42ff\" (UID: \"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.549757 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-bundle\") pod \"22820358-bfdc-4f0f-94fd-a31b149e42ff\" (UID: 
\"22820358-bfdc-4f0f-94fd-a31b149e42ff\") " Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.550579 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-bundle" (OuterVolumeSpecName: "bundle") pod "22820358-bfdc-4f0f-94fd-a31b149e42ff" (UID: "22820358-bfdc-4f0f-94fd-a31b149e42ff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.559239 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-util" (OuterVolumeSpecName: "util") pod "22820358-bfdc-4f0f-94fd-a31b149e42ff" (UID: "22820358-bfdc-4f0f-94fd-a31b149e42ff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.559586 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22820358-bfdc-4f0f-94fd-a31b149e42ff-kube-api-access-c6lxz" (OuterVolumeSpecName: "kube-api-access-c6lxz") pod "22820358-bfdc-4f0f-94fd-a31b149e42ff" (UID: "22820358-bfdc-4f0f-94fd-a31b149e42ff"). InnerVolumeSpecName "kube-api-access-c6lxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.653644 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6lxz\" (UniqueName: \"kubernetes.io/projected/22820358-bfdc-4f0f-94fd-a31b149e42ff-kube-api-access-c6lxz\") on node \"crc\" DevicePath \"\"" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.653697 5030 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.653715 5030 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22820358-bfdc-4f0f-94fd-a31b149e42ff-util\") on node \"crc\" DevicePath \"\"" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.967301 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.967417 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz" event={"ID":"22820358-bfdc-4f0f-94fd-a31b149e42ff","Type":"ContainerDied","Data":"302eebb7c58a3b28f02f231697368e0a038a799a1d3a0ec0e4e7404d33e4d27a"} Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.968091 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="302eebb7c58a3b28f02f231697368e0a038a799a1d3a0ec0e4e7404d33e4d27a" Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.971275 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"c474ef39ed6f56390a4be079bd5492e4ad63c60a5f5020f5e3ee664e0e765fde"} Nov 28 12:12:18 crc kubenswrapper[5030]: 
I1128 12:12:18.971306 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"c3fc900e7311ac21db156765be5eb64c99452b4ff009325d36d06ac17217de4b"} Nov 28 12:12:18 crc kubenswrapper[5030]: I1128 12:12:18.971317 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"7c3a4575dabc3670426f83457b7fdf46ca949d53255c427572b1df7c1c759e77"} Nov 28 12:12:19 crc kubenswrapper[5030]: I1128 12:12:19.986407 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"1dd8f6c71472028637ce6a024b85c6626a7c9daa9ca394b46087ffdaf98d0128"} Nov 28 12:12:21 crc kubenswrapper[5030]: I1128 12:12:21.000400 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"b1751df2871a4d603f73a2b76fcdb4432098d13c928327536804066660d1c450"} Nov 28 12:12:21 crc kubenswrapper[5030]: I1128 12:12:21.000888 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"470e6304f2d25e9755821c72af344d4ea7a7178d1f8a279851c7b1ed179ac776"} Nov 28 12:12:22 crc kubenswrapper[5030]: I1128 12:12:22.012630 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"9a23ecebe68d764b16d25775b42977ebced0cb5443ea9177b4200cd4f641376e"} Nov 28 12:12:22 crc kubenswrapper[5030]: I1128 12:12:22.012680 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" 
event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"f884eeb01bd08eeb62bae5f17809491b058e14649533e7363212b560de9050c8"} Nov 28 12:12:22 crc kubenswrapper[5030]: I1128 12:12:22.012693 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"2e6d20f9ccf79321fe42e5b368b37e5f954490fc7f135e1666f1aa540573111a"} Nov 28 12:12:22 crc kubenswrapper[5030]: I1128 12:12:22.012705 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"1ec424acd26b05de66925c84d2f0edd799b781657d64aa004541967138016089"} Nov 28 12:12:22 crc kubenswrapper[5030]: I1128 12:12:22.012718 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"c3818002-2687-4201-8ceb-f0272289cab9","Type":"ContainerStarted","Data":"6f88a97c45343c2e010c628ab779575be0619d2f7db4cadbe2515475355b4cfe"} Nov 28 12:12:22 crc kubenswrapper[5030]: I1128 12:12:22.055207 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-storage-0" podStartSLOduration=35.843533906 podStartE2EDuration="44.055181225s" podCreationTimestamp="2025-11-28 12:11:38 +0000 UTC" firstStartedPulling="2025-11-28 12:12:12.359337634 +0000 UTC m=+1150.301080317" lastFinishedPulling="2025-11-28 12:12:20.570984953 +0000 UTC m=+1158.512727636" observedRunningTime="2025-11-28 12:12:22.049660526 +0000 UTC m=+1159.991403229" watchObservedRunningTime="2025-11-28 12:12:22.055181225 +0000 UTC m=+1159.996923908" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.523033 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl"] Nov 28 12:12:34 crc kubenswrapper[5030]: E1128 12:12:34.524170 5030 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerName="extract" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.524186 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerName="extract" Nov 28 12:12:34 crc kubenswrapper[5030]: E1128 12:12:34.524221 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerName="util" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.524229 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerName="util" Nov 28 12:12:34 crc kubenswrapper[5030]: E1128 12:12:34.524238 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerName="pull" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.524245 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerName="pull" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.524382 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="22820358-bfdc-4f0f-94fd-a31b149e42ff" containerName="extract" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.524953 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.528590 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-service-cert" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.528770 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xkmxq" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.550589 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl"] Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.627779 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdgfr\" (UniqueName: \"kubernetes.io/projected/895c6168-d396-4e47-9d84-a5fa7e55eafa-kube-api-access-bdgfr\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.627958 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/895c6168-d396-4e47-9d84-a5fa7e55eafa-webhook-cert\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.628088 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/895c6168-d396-4e47-9d84-a5fa7e55eafa-apiservice-cert\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: 
\"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.729368 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/895c6168-d396-4e47-9d84-a5fa7e55eafa-apiservice-cert\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.729425 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdgfr\" (UniqueName: \"kubernetes.io/projected/895c6168-d396-4e47-9d84-a5fa7e55eafa-kube-api-access-bdgfr\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.729498 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/895c6168-d396-4e47-9d84-a5fa7e55eafa-webhook-cert\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.740183 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/895c6168-d396-4e47-9d84-a5fa7e55eafa-webhook-cert\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.741618 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/895c6168-d396-4e47-9d84-a5fa7e55eafa-apiservice-cert\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.755914 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdgfr\" (UniqueName: \"kubernetes.io/projected/895c6168-d396-4e47-9d84-a5fa7e55eafa-kube-api-access-bdgfr\") pod \"glance-operator-controller-manager-6d74bbdf9d-vnztl\" (UID: \"895c6168-d396-4e47-9d84-a5fa7e55eafa\") " pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:34 crc kubenswrapper[5030]: I1128 12:12:34.845259 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:35 crc kubenswrapper[5030]: I1128 12:12:35.172567 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl"] Nov 28 12:12:36 crc kubenswrapper[5030]: I1128 12:12:36.156430 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" event={"ID":"895c6168-d396-4e47-9d84-a5fa7e55eafa","Type":"ContainerStarted","Data":"a8de8370d5f909fd4433cbadf0bc44415346312daf9faaa0456acee2f4cb13ca"} Nov 28 12:12:38 crc kubenswrapper[5030]: I1128 12:12:38.865904 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" event={"ID":"895c6168-d396-4e47-9d84-a5fa7e55eafa","Type":"ContainerStarted","Data":"58e59b0078fd67ed9f21b5c6d245dcdbbd552ec6c4d4deb7403b9f85b3562d5c"} Nov 28 12:12:38 crc kubenswrapper[5030]: I1128 12:12:38.867260 5030 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:38 crc kubenswrapper[5030]: I1128 12:12:38.891463 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" podStartSLOduration=3.314396297 podStartE2EDuration="4.891424419s" podCreationTimestamp="2025-11-28 12:12:34 +0000 UTC" firstStartedPulling="2025-11-28 12:12:35.182327906 +0000 UTC m=+1173.124070589" lastFinishedPulling="2025-11-28 12:12:36.759356028 +0000 UTC m=+1174.701098711" observedRunningTime="2025-11-28 12:12:38.882990311 +0000 UTC m=+1176.824732994" watchObservedRunningTime="2025-11-28 12:12:38.891424419 +0000 UTC m=+1176.833167122" Nov 28 12:12:44 crc kubenswrapper[5030]: I1128 12:12:44.851045 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6d74bbdf9d-vnztl" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.148162 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.150350 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.155248 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.155600 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.156397 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-6252m" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.162195 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.162787 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.214150 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-vwlpj"] Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.229998 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.253587 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm"] Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.255242 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.258389 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.262626 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-vwlpj"] Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.269964 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm"] Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.348282 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6092acec-456c-4682-8567-f20d6022b818-openstack-config-secret\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.349084 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544e8049-7f6a-400f-a4cc-2fd82beaed9d-operator-scripts\") pod \"glance-db-create-vwlpj\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.350129 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6092acec-456c-4682-8567-f20d6022b818-openstack-config\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.350795 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nqtvs\" (UniqueName: \"kubernetes.io/projected/6092acec-456c-4682-8567-f20d6022b818-kube-api-access-nqtvs\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.350944 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78w9l\" (UniqueName: \"kubernetes.io/projected/544e8049-7f6a-400f-a4cc-2fd82beaed9d-kube-api-access-78w9l\") pod \"glance-db-create-vwlpj\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.351241 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/6092acec-456c-4682-8567-f20d6022b818-openstack-scripts\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.453930 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6092acec-456c-4682-8567-f20d6022b818-openstack-config-secret\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.453997 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544e8049-7f6a-400f-a4cc-2fd82beaed9d-operator-scripts\") pod \"glance-db-create-vwlpj\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.454057 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/6092acec-456c-4682-8567-f20d6022b818-openstack-config\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.454097 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twnq7\" (UniqueName: \"kubernetes.io/projected/c4ae2100-f0e3-45a8-8800-b78ccb909903-kube-api-access-twnq7\") pod \"glance-d9a5-account-create-update-7vnbm\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.454136 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtvs\" (UniqueName: \"kubernetes.io/projected/6092acec-456c-4682-8567-f20d6022b818-kube-api-access-nqtvs\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.454204 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78w9l\" (UniqueName: \"kubernetes.io/projected/544e8049-7f6a-400f-a4cc-2fd82beaed9d-kube-api-access-78w9l\") pod \"glance-db-create-vwlpj\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.454326 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae2100-f0e3-45a8-8800-b78ccb909903-operator-scripts\") pod \"glance-d9a5-account-create-update-7vnbm\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.454364 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/6092acec-456c-4682-8567-f20d6022b818-openstack-scripts\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.455534 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6092acec-456c-4682-8567-f20d6022b818-openstack-config\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.455588 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544e8049-7f6a-400f-a4cc-2fd82beaed9d-operator-scripts\") pod \"glance-db-create-vwlpj\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.455601 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/6092acec-456c-4682-8567-f20d6022b818-openstack-scripts\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.462377 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6092acec-456c-4682-8567-f20d6022b818-openstack-config-secret\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.479583 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtvs\" (UniqueName: 
\"kubernetes.io/projected/6092acec-456c-4682-8567-f20d6022b818-kube-api-access-nqtvs\") pod \"openstackclient\" (UID: \"6092acec-456c-4682-8567-f20d6022b818\") " pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.485080 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78w9l\" (UniqueName: \"kubernetes.io/projected/544e8049-7f6a-400f-a4cc-2fd82beaed9d-kube-api-access-78w9l\") pod \"glance-db-create-vwlpj\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.485891 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.555768 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae2100-f0e3-45a8-8800-b78ccb909903-operator-scripts\") pod \"glance-d9a5-account-create-update-7vnbm\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.555894 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twnq7\" (UniqueName: \"kubernetes.io/projected/c4ae2100-f0e3-45a8-8800-b78ccb909903-kube-api-access-twnq7\") pod \"glance-d9a5-account-create-update-7vnbm\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.557192 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae2100-f0e3-45a8-8800-b78ccb909903-operator-scripts\") pod \"glance-d9a5-account-create-update-7vnbm\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " 
pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.563786 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.598598 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twnq7\" (UniqueName: \"kubernetes.io/projected/c4ae2100-f0e3-45a8-8800-b78ccb909903-kube-api-access-twnq7\") pod \"glance-d9a5-account-create-update-7vnbm\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.771093 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.874908 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:50 crc kubenswrapper[5030]: I1128 12:12:50.993115 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"6092acec-456c-4682-8567-f20d6022b818","Type":"ContainerStarted","Data":"2b4b23433fe801fec6c3a7d5af960886ca1f1240f6571faf7ad6e30870c7b3d4"} Nov 28 12:12:51 crc kubenswrapper[5030]: I1128 12:12:51.033331 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-vwlpj"] Nov 28 12:12:51 crc kubenswrapper[5030]: I1128 12:12:51.307839 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm"] Nov 28 12:12:51 crc kubenswrapper[5030]: W1128 12:12:51.313661 5030 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4ae2100_f0e3_45a8_8800_b78ccb909903.slice/crio-4963eee3a9291dba9df525484fd665fbf0e4d5ce507fd1e36c5bf40f4da1ce28 WatchSource:0}: Error finding container 4963eee3a9291dba9df525484fd665fbf0e4d5ce507fd1e36c5bf40f4da1ce28: Status 404 returned error can't find the container with id 4963eee3a9291dba9df525484fd665fbf0e4d5ce507fd1e36c5bf40f4da1ce28 Nov 28 12:12:52 crc kubenswrapper[5030]: I1128 12:12:52.005694 5030 generic.go:334] "Generic (PLEG): container finished" podID="c4ae2100-f0e3-45a8-8800-b78ccb909903" containerID="a2073e4ee8647538923d8e6e1752350724fb78bc31280b46be66e003aece4e32" exitCode=0 Nov 28 12:12:52 crc kubenswrapper[5030]: I1128 12:12:52.005807 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" event={"ID":"c4ae2100-f0e3-45a8-8800-b78ccb909903","Type":"ContainerDied","Data":"a2073e4ee8647538923d8e6e1752350724fb78bc31280b46be66e003aece4e32"} Nov 28 12:12:52 crc kubenswrapper[5030]: I1128 12:12:52.006270 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" event={"ID":"c4ae2100-f0e3-45a8-8800-b78ccb909903","Type":"ContainerStarted","Data":"4963eee3a9291dba9df525484fd665fbf0e4d5ce507fd1e36c5bf40f4da1ce28"} Nov 28 12:12:52 crc kubenswrapper[5030]: I1128 12:12:52.008412 5030 generic.go:334] "Generic (PLEG): container finished" podID="544e8049-7f6a-400f-a4cc-2fd82beaed9d" containerID="5672d8f9ff3ac798cceacaab3d7180f209fcdbe28b413f1a650e33f582de3535" exitCode=0 Nov 28 12:12:52 crc kubenswrapper[5030]: I1128 12:12:52.008569 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-vwlpj" event={"ID":"544e8049-7f6a-400f-a4cc-2fd82beaed9d","Type":"ContainerDied","Data":"5672d8f9ff3ac798cceacaab3d7180f209fcdbe28b413f1a650e33f582de3535"} Nov 28 12:12:52 crc kubenswrapper[5030]: I1128 12:12:52.008744 5030 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-vwlpj" event={"ID":"544e8049-7f6a-400f-a4cc-2fd82beaed9d","Type":"ContainerStarted","Data":"afc0b2194725bf2d55ec8a53fb995d57853b1e71a580edb043baf63dfdc25db5"} Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.446170 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.453379 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.528976 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae2100-f0e3-45a8-8800-b78ccb909903-operator-scripts\") pod \"c4ae2100-f0e3-45a8-8800-b78ccb909903\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.529164 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twnq7\" (UniqueName: \"kubernetes.io/projected/c4ae2100-f0e3-45a8-8800-b78ccb909903-kube-api-access-twnq7\") pod \"c4ae2100-f0e3-45a8-8800-b78ccb909903\" (UID: \"c4ae2100-f0e3-45a8-8800-b78ccb909903\") " Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.529263 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78w9l\" (UniqueName: \"kubernetes.io/projected/544e8049-7f6a-400f-a4cc-2fd82beaed9d-kube-api-access-78w9l\") pod \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.529293 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544e8049-7f6a-400f-a4cc-2fd82beaed9d-operator-scripts\") pod 
\"544e8049-7f6a-400f-a4cc-2fd82beaed9d\" (UID: \"544e8049-7f6a-400f-a4cc-2fd82beaed9d\") " Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.530810 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4ae2100-f0e3-45a8-8800-b78ccb909903-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4ae2100-f0e3-45a8-8800-b78ccb909903" (UID: "c4ae2100-f0e3-45a8-8800-b78ccb909903"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.531345 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544e8049-7f6a-400f-a4cc-2fd82beaed9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "544e8049-7f6a-400f-a4cc-2fd82beaed9d" (UID: "544e8049-7f6a-400f-a4cc-2fd82beaed9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.536441 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544e8049-7f6a-400f-a4cc-2fd82beaed9d-kube-api-access-78w9l" (OuterVolumeSpecName: "kube-api-access-78w9l") pod "544e8049-7f6a-400f-a4cc-2fd82beaed9d" (UID: "544e8049-7f6a-400f-a4cc-2fd82beaed9d"). InnerVolumeSpecName "kube-api-access-78w9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.546847 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ae2100-f0e3-45a8-8800-b78ccb909903-kube-api-access-twnq7" (OuterVolumeSpecName: "kube-api-access-twnq7") pod "c4ae2100-f0e3-45a8-8800-b78ccb909903" (UID: "c4ae2100-f0e3-45a8-8800-b78ccb909903"). InnerVolumeSpecName "kube-api-access-twnq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.631676 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twnq7\" (UniqueName: \"kubernetes.io/projected/c4ae2100-f0e3-45a8-8800-b78ccb909903-kube-api-access-twnq7\") on node \"crc\" DevicePath \"\"" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.631745 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78w9l\" (UniqueName: \"kubernetes.io/projected/544e8049-7f6a-400f-a4cc-2fd82beaed9d-kube-api-access-78w9l\") on node \"crc\" DevicePath \"\"" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.631761 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/544e8049-7f6a-400f-a4cc-2fd82beaed9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:12:53 crc kubenswrapper[5030]: I1128 12:12:53.631774 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae2100-f0e3-45a8-8800-b78ccb909903-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:12:54 crc kubenswrapper[5030]: I1128 12:12:54.035212 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" event={"ID":"c4ae2100-f0e3-45a8-8800-b78ccb909903","Type":"ContainerDied","Data":"4963eee3a9291dba9df525484fd665fbf0e4d5ce507fd1e36c5bf40f4da1ce28"} Nov 28 12:12:54 crc kubenswrapper[5030]: I1128 12:12:54.035263 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4963eee3a9291dba9df525484fd665fbf0e4d5ce507fd1e36c5bf40f4da1ce28" Nov 28 12:12:54 crc kubenswrapper[5030]: I1128 12:12:54.035320 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm" Nov 28 12:12:54 crc kubenswrapper[5030]: I1128 12:12:54.047040 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-vwlpj" event={"ID":"544e8049-7f6a-400f-a4cc-2fd82beaed9d","Type":"ContainerDied","Data":"afc0b2194725bf2d55ec8a53fb995d57853b1e71a580edb043baf63dfdc25db5"} Nov 28 12:12:54 crc kubenswrapper[5030]: I1128 12:12:54.047089 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc0b2194725bf2d55ec8a53fb995d57853b1e71a580edb043baf63dfdc25db5" Nov 28 12:12:54 crc kubenswrapper[5030]: I1128 12:12:54.047151 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-vwlpj" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.426800 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-jwx77"] Nov 28 12:12:55 crc kubenswrapper[5030]: E1128 12:12:55.427104 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e8049-7f6a-400f-a4cc-2fd82beaed9d" containerName="mariadb-database-create" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.427126 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e8049-7f6a-400f-a4cc-2fd82beaed9d" containerName="mariadb-database-create" Nov 28 12:12:55 crc kubenswrapper[5030]: E1128 12:12:55.427136 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ae2100-f0e3-45a8-8800-b78ccb909903" containerName="mariadb-account-create-update" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.427142 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ae2100-f0e3-45a8-8800-b78ccb909903" containerName="mariadb-account-create-update" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.428024 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="544e8049-7f6a-400f-a4cc-2fd82beaed9d" 
containerName="mariadb-database-create" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.428062 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ae2100-f0e3-45a8-8800-b78ccb909903" containerName="mariadb-account-create-update" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.429596 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.437966 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.439687 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jwx77"] Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.444820 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-ft8ps" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.486919 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-db-sync-config-data\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.486978 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-config-data\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.487007 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stm7t\" (UniqueName: 
\"kubernetes.io/projected/d87ac6bc-7088-417e-a99b-1a28cc27dacb-kube-api-access-stm7t\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.588795 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-config-data\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.589203 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stm7t\" (UniqueName: \"kubernetes.io/projected/d87ac6bc-7088-417e-a99b-1a28cc27dacb-kube-api-access-stm7t\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.589294 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-db-sync-config-data\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.599723 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-config-data\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.607511 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-db-sync-config-data\") pod 
\"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.607832 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stm7t\" (UniqueName: \"kubernetes.io/projected/d87ac6bc-7088-417e-a99b-1a28cc27dacb-kube-api-access-stm7t\") pod \"glance-db-sync-jwx77\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:55 crc kubenswrapper[5030]: I1128 12:12:55.754162 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:12:59 crc kubenswrapper[5030]: W1128 12:12:59.498003 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd87ac6bc_7088_417e_a99b_1a28cc27dacb.slice/crio-ec7591715243748acb638db93e320e27b251fc1534f3d4c8d3bfda89f91d0607 WatchSource:0}: Error finding container ec7591715243748acb638db93e320e27b251fc1534f3d4c8d3bfda89f91d0607: Status 404 returned error can't find the container with id ec7591715243748acb638db93e320e27b251fc1534f3d4c8d3bfda89f91d0607 Nov 28 12:12:59 crc kubenswrapper[5030]: I1128 12:12:59.515288 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jwx77"] Nov 28 12:13:00 crc kubenswrapper[5030]: I1128 12:13:00.111702 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jwx77" event={"ID":"d87ac6bc-7088-417e-a99b-1a28cc27dacb","Type":"ContainerStarted","Data":"ec7591715243748acb638db93e320e27b251fc1534f3d4c8d3bfda89f91d0607"} Nov 28 12:13:00 crc kubenswrapper[5030]: I1128 12:13:00.114289 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" 
event={"ID":"6092acec-456c-4682-8567-f20d6022b818","Type":"ContainerStarted","Data":"55062c4bae9807344c43bbac748794512bbfccf81081afb002cbc399d6a0fea0"} Nov 28 12:13:00 crc kubenswrapper[5030]: I1128 12:13:00.138530 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=1.858134002 podStartE2EDuration="10.138509078s" podCreationTimestamp="2025-11-28 12:12:50 +0000 UTC" firstStartedPulling="2025-11-28 12:12:50.783986957 +0000 UTC m=+1188.725729640" lastFinishedPulling="2025-11-28 12:12:59.064362033 +0000 UTC m=+1197.006104716" observedRunningTime="2025-11-28 12:13:00.132942078 +0000 UTC m=+1198.074684781" watchObservedRunningTime="2025-11-28 12:13:00.138509078 +0000 UTC m=+1198.080251761" Nov 28 12:13:17 crc kubenswrapper[5030]: I1128 12:13:17.277011 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jwx77" event={"ID":"d87ac6bc-7088-417e-a99b-1a28cc27dacb","Type":"ContainerStarted","Data":"29282ecd553125ddf2a32ee18e61e61ec54e77a99eee8b11f63bf5fd4b3ab22b"} Nov 28 12:13:17 crc kubenswrapper[5030]: I1128 12:13:17.315022 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-jwx77" podStartSLOduration=5.339932664 podStartE2EDuration="22.315001139s" podCreationTimestamp="2025-11-28 12:12:55 +0000 UTC" firstStartedPulling="2025-11-28 12:12:59.500861116 +0000 UTC m=+1197.442603809" lastFinishedPulling="2025-11-28 12:13:16.475929561 +0000 UTC m=+1214.417672284" observedRunningTime="2025-11-28 12:13:17.31206426 +0000 UTC m=+1215.253806963" watchObservedRunningTime="2025-11-28 12:13:17.315001139 +0000 UTC m=+1215.256743832" Nov 28 12:13:24 crc kubenswrapper[5030]: I1128 12:13:24.346787 5030 generic.go:334] "Generic (PLEG): container finished" podID="d87ac6bc-7088-417e-a99b-1a28cc27dacb" containerID="29282ecd553125ddf2a32ee18e61e61ec54e77a99eee8b11f63bf5fd4b3ab22b" exitCode=0 Nov 28 12:13:24 crc 
kubenswrapper[5030]: I1128 12:13:24.346881 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jwx77" event={"ID":"d87ac6bc-7088-417e-a99b-1a28cc27dacb","Type":"ContainerDied","Data":"29282ecd553125ddf2a32ee18e61e61ec54e77a99eee8b11f63bf5fd4b3ab22b"} Nov 28 12:13:25 crc kubenswrapper[5030]: I1128 12:13:25.801449 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jwx77" Nov 28 12:13:25 crc kubenswrapper[5030]: I1128 12:13:25.940083 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-config-data\") pod \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " Nov 28 12:13:25 crc kubenswrapper[5030]: I1128 12:13:25.940154 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stm7t\" (UniqueName: \"kubernetes.io/projected/d87ac6bc-7088-417e-a99b-1a28cc27dacb-kube-api-access-stm7t\") pod \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " Nov 28 12:13:25 crc kubenswrapper[5030]: I1128 12:13:25.940277 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-db-sync-config-data\") pod \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\" (UID: \"d87ac6bc-7088-417e-a99b-1a28cc27dacb\") " Nov 28 12:13:25 crc kubenswrapper[5030]: I1128 12:13:25.948231 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87ac6bc-7088-417e-a99b-1a28cc27dacb-kube-api-access-stm7t" (OuterVolumeSpecName: "kube-api-access-stm7t") pod "d87ac6bc-7088-417e-a99b-1a28cc27dacb" (UID: "d87ac6bc-7088-417e-a99b-1a28cc27dacb"). InnerVolumeSpecName "kube-api-access-stm7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:13:25 crc kubenswrapper[5030]: I1128 12:13:25.949301 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d87ac6bc-7088-417e-a99b-1a28cc27dacb" (UID: "d87ac6bc-7088-417e-a99b-1a28cc27dacb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:13:25 crc kubenswrapper[5030]: I1128 12:13:25.978358 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-config-data" (OuterVolumeSpecName: "config-data") pod "d87ac6bc-7088-417e-a99b-1a28cc27dacb" (UID: "d87ac6bc-7088-417e-a99b-1a28cc27dacb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:13:26 crc kubenswrapper[5030]: I1128 12:13:26.042379 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:26 crc kubenswrapper[5030]: I1128 12:13:26.042427 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stm7t\" (UniqueName: \"kubernetes.io/projected/d87ac6bc-7088-417e-a99b-1a28cc27dacb-kube-api-access-stm7t\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:26 crc kubenswrapper[5030]: I1128 12:13:26.042441 5030 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d87ac6bc-7088-417e-a99b-1a28cc27dacb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:26 crc kubenswrapper[5030]: I1128 12:13:26.369346 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jwx77" 
event={"ID":"d87ac6bc-7088-417e-a99b-1a28cc27dacb","Type":"ContainerDied","Data":"ec7591715243748acb638db93e320e27b251fc1534f3d4c8d3bfda89f91d0607"}
Nov 28 12:13:26 crc kubenswrapper[5030]: I1128 12:13:26.369403 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7591715243748acb638db93e320e27b251fc1534f3d4c8d3bfda89f91d0607"
Nov 28 12:13:26 crc kubenswrapper[5030]: I1128 12:13:26.369574 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jwx77"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.952563 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Nov 28 12:13:27 crc kubenswrapper[5030]: E1128 12:13:27.953365 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87ac6bc-7088-417e-a99b-1a28cc27dacb" containerName="glance-db-sync"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.953381 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87ac6bc-7088-417e-a99b-1a28cc27dacb" containerName="glance-db-sync"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.953559 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87ac6bc-7088-417e-a99b-1a28cc27dacb" containerName="glance-db-sync"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.954435 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.957302 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.957375 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.957497 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-ft8ps"
Nov 28 12:13:27 crc kubenswrapper[5030]: I1128 12:13:27.975148 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.001800 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"]
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.003354 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.037566 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"]
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099173 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2skbs\" (UniqueName: \"kubernetes.io/projected/05f53d98-9ca9-49bf-a8fe-1898dd42106b-kube-api-access-2skbs\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099228 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-config-data\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099264 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-scripts\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099292 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-nvme\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099318 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-lib-modules\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099343 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-logs\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099410 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099433 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099458 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099502 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099525 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-sys\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099548 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-dev\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099583 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-run\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.099601 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-httpd-run\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.149291 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"]
Nov 28 12:13:28 crc kubenswrapper[5030]: E1128 12:13:28.150240 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data dev etc-iscsi etc-nvme glance glance-cache httpd-run kube-api-access-vs5dw lib-modules logs run scripts sys var-locks-brick], unattached volumes=[], failed to process volumes=[]: context canceled" pod="glance-kuttl-tests/glance-default-single-1" podUID="d07c861d-9df8-4d2e-8b81-1c34f0dce788"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200763 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-logs\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200820 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200846 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200874 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200902 5030 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200920 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-run\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200934 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200957 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-sys\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200976 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-scripts\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.200997 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201019 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-sys\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201043 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-dev\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201064 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201080 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201101 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs5dw\" (UniqueName: \"kubernetes.io/projected/d07c861d-9df8-4d2e-8b81-1c34f0dce788-kube-api-access-vs5dw\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201122 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-run\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201140 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-httpd-run\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201162 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-lib-modules\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201182 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-logs\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201209 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2skbs\" (UniqueName: \"kubernetes.io/projected/05f53d98-9ca9-49bf-a8fe-1898dd42106b-kube-api-access-2skbs\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201229 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-config-data\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201246 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-scripts\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201270 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-httpd-run\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201294 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-nvme\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201311 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-dev\") pod
\"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201332 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201351 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-lib-modules\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201369 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-config-data\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.201853 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-logs\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.202135 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.213111 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.213738 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.213830 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-dev\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.213868 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.213896 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-sys\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.214503 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-lib-modules\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.214617 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-nvme\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.214885 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-httpd-run\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.214890 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-run\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.222588 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-scripts\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.223043 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-config-data\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.228666 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.238648 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.252221 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2skbs\" (UniqueName: \"kubernetes.io/projected/05f53d98-9ca9-49bf-a8fe-1898dd42106b-kube-api-access-2skbs\") pod \"glance-default-single-0\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.274438 5030 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.302801 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-httpd-run\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303105 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-dev\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303136 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303159 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-config-data\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303194 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303229 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303247 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-run\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303268 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-sys\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303287 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-scripts\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303328 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303346 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303365 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs5dw\" (UniqueName: \"kubernetes.io/projected/d07c861d-9df8-4d2e-8b81-1c34f0dce788-kube-api-access-vs5dw\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303384 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-lib-modules\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303401 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-logs\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.303974 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-logs\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.304184 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-httpd-run\") pod \"glance-default-single-1\"
(UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.304219 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-dev\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.304380 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.304607 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-sys\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.308822 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.309339 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.309379 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-run\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.309665 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-config-data\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.309718 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.309799 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.309745 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-lib-modules\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.324795 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-scripts\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.340991 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs5dw\" (UniqueName: \"kubernetes.io/projected/d07c861d-9df8-4d2e-8b81-1c34f0dce788-kube-api-access-vs5dw\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.356700 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.356746 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.385078 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.402703 5030 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507649 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-var-locks-brick\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507750 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-httpd-run\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507805 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs5dw\" (UniqueName: \"kubernetes.io/projected/d07c861d-9df8-4d2e-8b81-1c34f0dce788-kube-api-access-vs5dw\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507844 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-nvme\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507821 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507867 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-run\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507886 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507923 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-dev\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507951 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-sys\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.507970 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-lib-modules\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508009 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-scripts\") pod 
\"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508053 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-config-data\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508115 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-iscsi\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508137 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-logs\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508168 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\" (UID: \"d07c861d-9df8-4d2e-8b81-1c34f0dce788\") " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508537 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508574 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-sys" (OuterVolumeSpecName: "sys") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508589 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508606 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-run" (OuterVolumeSpecName: "run") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508889 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-dev" (OuterVolumeSpecName: "dev") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.508970 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509142 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509396 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-logs" (OuterVolumeSpecName: "logs") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509799 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509823 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509833 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509861 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509870 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509879 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509888 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d07c861d-9df8-4d2e-8b81-1c34f0dce788-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509896 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.509905 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d07c861d-9df8-4d2e-8b81-1c34f0dce788-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.514563 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07c861d-9df8-4d2e-8b81-1c34f0dce788-kube-api-access-vs5dw" (OuterVolumeSpecName: "kube-api-access-vs5dw") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "kube-api-access-vs5dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.514566 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-config-data" (OuterVolumeSpecName: "config-data") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.515627 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.515810 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-scripts" (OuterVolumeSpecName: "scripts") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.517069 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "d07c861d-9df8-4d2e-8b81-1c34f0dce788" (UID: "d07c861d-9df8-4d2e-8b81-1c34f0dce788"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.612047 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.624826 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs5dw\" (UniqueName: \"kubernetes.io/projected/d07c861d-9df8-4d2e-8b81-1c34f0dce788-kube-api-access-vs5dw\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.624875 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.624886 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.624897 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07c861d-9df8-4d2e-8b81-1c34f0dce788-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.624770 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.639766 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.726792 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.726828 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:13:28 crc kubenswrapper[5030]: I1128 12:13:28.777264 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.398212 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.398254 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"05f53d98-9ca9-49bf-a8fe-1898dd42106b","Type":"ContainerStarted","Data":"da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0"} Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.398982 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"05f53d98-9ca9-49bf-a8fe-1898dd42106b","Type":"ContainerStarted","Data":"82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d"} Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.399002 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"05f53d98-9ca9-49bf-a8fe-1898dd42106b","Type":"ContainerStarted","Data":"560131195d951dcfa560b8d5a727d6e1a2be71ece8eed88c4bee5492d5256518"} Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.471438 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=3.471412323 podStartE2EDuration="3.471412323s" podCreationTimestamp="2025-11-28 12:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:13:29.436589731 +0000 UTC m=+1227.378332454" watchObservedRunningTime="2025-11-28 12:13:29.471412323 +0000 UTC m=+1227.413155026" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.484066 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.526629 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 
12:13:29.542954 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.548743 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.552511 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.648692 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.648771 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-lib-modules\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.648819 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-dev\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.648860 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-var-locks-brick\") pod \"glance-default-single-1\" (UID: 
\"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.648914 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.648951 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-sys\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.648977 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.649005 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-config-data\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.649020 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-logs\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " 
pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.649047 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-scripts\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.649239 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-httpd-run\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.649327 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-run\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.649357 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5nzw\" (UniqueName: \"kubernetes.io/projected/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-kube-api-access-q5nzw\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.649509 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " 
pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751384 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751506 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-lib-modules\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751561 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-dev\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751610 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751655 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751705 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-sys\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751747 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751733 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-lib-modules\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751810 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-config-data\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751841 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-logs\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751848 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-nvme\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751843 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751896 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-sys\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751735 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-dev\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.751896 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-scripts\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752135 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752150 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-httpd-run\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752217 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-run\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752257 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5nzw\" (UniqueName: \"kubernetes.io/projected/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-kube-api-access-q5nzw\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752303 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752148 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752437 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-run\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.752425 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.753639 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-logs\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.753839 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-httpd-run\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.762514 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-scripts\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.764531 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-config-data\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.774485 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5nzw\" (UniqueName: \"kubernetes.io/projected/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-kube-api-access-q5nzw\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.795563 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.809139 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:29 crc kubenswrapper[5030]: I1128 12:13:29.877759 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:30 crc kubenswrapper[5030]: I1128 12:13:30.230718 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"]
Nov 28 12:13:30 crc kubenswrapper[5030]: I1128 12:13:30.407133 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07c861d-9df8-4d2e-8b81-1c34f0dce788" path="/var/lib/kubelet/pods/d07c861d-9df8-4d2e-8b81-1c34f0dce788/volumes"
Nov 28 12:13:30 crc kubenswrapper[5030]: I1128 12:13:30.412344 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a","Type":"ContainerStarted","Data":"e7457502763f3f63d223c6cb952deb48f63ddadda5a3b2fbd5e9de42a7883d15"}
Nov 28 12:13:31 crc kubenswrapper[5030]: I1128 12:13:31.425253 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a","Type":"ContainerStarted","Data":"c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0"}
Nov 28 12:13:31 crc kubenswrapper[5030]: I1128 12:13:31.427327 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a","Type":"ContainerStarted","Data":"12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a"}
Nov 28 12:13:31 crc kubenswrapper[5030]: I1128 12:13:31.455430 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=2.455406119 podStartE2EDuration="2.455406119s" podCreationTimestamp="2025-11-28 12:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:13:31.44953594 +0000 UTC m=+1229.391278623" watchObservedRunningTime="2025-11-28 12:13:31.455406119 +0000 UTC m=+1229.397148822"
Nov 28 12:13:38 crc kubenswrapper[5030]: I1128 12:13:38.276776 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:38 crc kubenswrapper[5030]: I1128 12:13:38.279598 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:38 crc kubenswrapper[5030]: I1128 12:13:38.309844 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:38 crc kubenswrapper[5030]: I1128 12:13:38.325159 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:38 crc kubenswrapper[5030]: I1128 12:13:38.514934 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:38 crc kubenswrapper[5030]: I1128 12:13:38.517526 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:39 crc kubenswrapper[5030]: I1128 12:13:39.878525 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:39 crc kubenswrapper[5030]: I1128 12:13:39.878632 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:39 crc kubenswrapper[5030]: I1128 12:13:39.928633 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:39 crc kubenswrapper[5030]: I1128 12:13:39.932414 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:40 crc kubenswrapper[5030]: I1128 12:13:40.530677 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 12:13:40 crc kubenswrapper[5030]: I1128 12:13:40.531760 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 12:13:40 crc kubenswrapper[5030]: I1128 12:13:40.531152 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:40 crc kubenswrapper[5030]: I1128 12:13:40.532887 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:41 crc kubenswrapper[5030]: I1128 12:13:41.691134 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:41 crc kubenswrapper[5030]: I1128 12:13:41.691575 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 12:13:41 crc kubenswrapper[5030]: I1128 12:13:41.698951 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:42 crc kubenswrapper[5030]: I1128 12:13:42.549060 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 12:13:42 crc kubenswrapper[5030]: I1128 12:13:42.549124 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 12:13:42 crc kubenswrapper[5030]: I1128 12:13:42.787956 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:42 crc kubenswrapper[5030]: I1128 12:13:42.804228 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1"
Nov 28 12:13:42 crc kubenswrapper[5030]: I1128 12:13:42.925760 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Nov 28 12:13:43 crc kubenswrapper[5030]: I1128 12:13:43.557892 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-log" containerID="cri-o://82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d" gracePeriod=30
Nov 28 12:13:43 crc kubenswrapper[5030]: I1128 12:13:43.558329 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-httpd" containerID="cri-o://da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0" gracePeriod=30
Nov 28 12:13:43 crc kubenswrapper[5030]: I1128 12:13:43.564273 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.100:9292/healthcheck\": EOF"
Nov 28 12:13:43 crc kubenswrapper[5030]: I1128 12:13:43.564423 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.100:9292/healthcheck\": EOF"
Nov 28 12:13:44 crc kubenswrapper[5030]: I1128 12:13:44.567574 5030 generic.go:334] "Generic (PLEG): container finished" podID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerID="82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d" exitCode=143
Nov 28 12:13:44 crc kubenswrapper[5030]: I1128 12:13:44.567673 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"05f53d98-9ca9-49bf-a8fe-1898dd42106b","Type":"ContainerDied","Data":"82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d"}
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.547892 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.643907 5030 generic.go:334] "Generic (PLEG): container finished" podID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerID="da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0" exitCode=0
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.643980 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"05f53d98-9ca9-49bf-a8fe-1898dd42106b","Type":"ContainerDied","Data":"da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0"}
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.644060 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"05f53d98-9ca9-49bf-a8fe-1898dd42106b","Type":"ContainerDied","Data":"560131195d951dcfa560b8d5a727d6e1a2be71ece8eed88c4bee5492d5256518"}
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.644083 5030 scope.go:117] "RemoveContainer" containerID="da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.644084 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.663616 5030 scope.go:117] "RemoveContainer" containerID="82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667120 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-run\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667171 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-nvme\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667257 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-run" (OuterVolumeSpecName: "run") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667276 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-var-locks-brick\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667329 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667371 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667413 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667511 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-logs\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667544 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-sys\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667575 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-scripts\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667612 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-httpd-run\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667654 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-lib-modules\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667670 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-dev\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667708 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-config-data\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667732 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667795 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2skbs\" (UniqueName: \"kubernetes.io/projected/05f53d98-9ca9-49bf-a8fe-1898dd42106b-kube-api-access-2skbs\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.667823 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-iscsi\") pod \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\" (UID: \"05f53d98-9ca9-49bf-a8fe-1898dd42106b\") "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669019 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-var-locks-brick\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669041 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669051 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669082 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669107 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669127 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-dev" (OuterVolumeSpecName: "dev") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669933 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-sys" (OuterVolumeSpecName: "sys") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.669964 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.670224 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-logs" (OuterVolumeSpecName: "logs") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.677620 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.678453 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f53d98-9ca9-49bf-a8fe-1898dd42106b-kube-api-access-2skbs" (OuterVolumeSpecName: "kube-api-access-2skbs") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "kube-api-access-2skbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.681795 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-scripts" (OuterVolumeSpecName: "scripts") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.683031 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.685729 5030 scope.go:117] "RemoveContainer" containerID="da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0"
Nov 28 12:13:49 crc kubenswrapper[5030]: E1128 12:13:49.695772 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0\": container with ID starting with da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0 not found: ID does not exist" containerID="da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.695838 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0"} err="failed to get container status \"da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0\": rpc error: code = NotFound desc = could not find container \"da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0\": container with ID starting with da829359f9a50fc7d353cc7fce375f30e3e7d3587988e8ac651592d3d983d1b0 not found: ID does not exist"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.695869 5030 scope.go:117] "RemoveContainer" containerID="82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d"
Nov 28 12:13:49 crc kubenswrapper[5030]: E1128 12:13:49.696881 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d\": container with ID starting with 82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d not found: ID does not exist" containerID="82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.696907 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d"} err="failed to get container status \"82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d\": rpc error: code = NotFound desc = could not find container \"82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d\": container with ID starting with 82abfe096ab1cf6a70f30fc137a6abe82ce1555fe278e4a3b800846379e9830d not found: ID does not exist"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.717854 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-config-data" (OuterVolumeSpecName: "config-data") pod "05f53d98-9ca9-49bf-a8fe-1898dd42106b" (UID: "05f53d98-9ca9-49bf-a8fe-1898dd42106b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771387 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771441 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-lib-modules\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771456 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-dev\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771491 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771544 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771559 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2skbs\" (UniqueName: \"kubernetes.io/projected/05f53d98-9ca9-49bf-a8fe-1898dd42106b-kube-api-access-2skbs\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771576 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771598 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771613 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f53d98-9ca9-49bf-a8fe-1898dd42106b-logs\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771628 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05f53d98-9ca9-49bf-a8fe-1898dd42106b-sys\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.771638 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f53d98-9ca9-49bf-a8fe-1898dd42106b-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.785562 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.800570 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.873882 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.873950 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Nov 28 12:13:49 crc kubenswrapper[5030]: I1128 12:13:49.975275 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.007659 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.022642 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Nov 28 12:13:50 crc kubenswrapper[5030]: E1128 12:13:50.023037 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-log"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.023056 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-log"
Nov 28 12:13:50 crc kubenswrapper[5030]: E1128 12:13:50.023082 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-httpd"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.023089 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-httpd"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.023287 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-log"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.023314 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" containerName="glance-httpd"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.024492 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.050486 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178238 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-dev\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178352 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178372 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-config-data\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0"
Nov
28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178401 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178425 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-run\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178688 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178758 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-logs\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.178846 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9pt\" (UniqueName: \"kubernetes.io/projected/33857e9b-1a42-4d31-b345-3ff7e9ea4853-kube-api-access-gl9pt\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc 
kubenswrapper[5030]: I1128 12:13:50.178888 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-nvme\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.179076 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-httpd-run\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.179125 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-sys\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.179239 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-scripts\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.179293 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-lib-modules\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 
12:13:50.179337 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281000 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281689 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-dev\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281717 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-config-data\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281737 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281761 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281777 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-dev\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281857 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-run\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281208 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281794 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-run\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.281995 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-iscsi\") pod \"glance-default-single-0\" (UID: 
\"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282027 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-logs\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282093 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl9pt\" (UniqueName: \"kubernetes.io/projected/33857e9b-1a42-4d31-b345-3ff7e9ea4853-kube-api-access-gl9pt\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282125 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-nvme\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282217 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-httpd-run\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282239 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-sys\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 
crc kubenswrapper[5030]: I1128 12:13:50.282289 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-scripts\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282414 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-lib-modules\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282641 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-lib-modules\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282677 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282774 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.282796 5030 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.283193 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-logs\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.283662 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-nvme\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.283971 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-httpd-run\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.284009 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-sys\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.290559 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-config-data\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.290883 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-scripts\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.303844 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl9pt\" (UniqueName: \"kubernetes.io/projected/33857e9b-1a42-4d31-b345-3ff7e9ea4853-kube-api-access-gl9pt\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.313943 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.316297 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-0\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.347197 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.404755 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f53d98-9ca9-49bf-a8fe-1898dd42106b" path="/var/lib/kubelet/pods/05f53d98-9ca9-49bf-a8fe-1898dd42106b/volumes" Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.588527 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:13:50 crc kubenswrapper[5030]: I1128 12:13:50.663757 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"33857e9b-1a42-4d31-b345-3ff7e9ea4853","Type":"ContainerStarted","Data":"28408b5a69134fbf4675e71d2d3948ae14a43de972f080a4a2e8530c9e28a80f"} Nov 28 12:13:51 crc kubenswrapper[5030]: I1128 12:13:51.678660 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"33857e9b-1a42-4d31-b345-3ff7e9ea4853","Type":"ContainerStarted","Data":"2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e"} Nov 28 12:13:51 crc kubenswrapper[5030]: I1128 12:13:51.679717 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"33857e9b-1a42-4d31-b345-3ff7e9ea4853","Type":"ContainerStarted","Data":"9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71"} Nov 28 12:14:00 crc kubenswrapper[5030]: I1128 12:14:00.348500 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:00 crc kubenswrapper[5030]: I1128 12:14:00.349302 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:00 crc kubenswrapper[5030]: I1128 12:14:00.406576 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:00 crc kubenswrapper[5030]: I1128 12:14:00.414517 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:00 crc kubenswrapper[5030]: I1128 12:14:00.454097 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=11.454078738 podStartE2EDuration="11.454078738s" podCreationTimestamp="2025-11-28 12:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:13:51.712253653 +0000 UTC m=+1249.653996356" watchObservedRunningTime="2025-11-28 12:14:00.454078738 +0000 UTC m=+1258.395821421" Nov 28 12:14:00 crc kubenswrapper[5030]: I1128 12:14:00.755446 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:00 crc kubenswrapper[5030]: I1128 12:14:00.755891 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:02 crc kubenswrapper[5030]: I1128 12:14:02.971299 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:02 crc kubenswrapper[5030]: I1128 12:14:02.972136 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:14:02 crc kubenswrapper[5030]: I1128 12:14:02.973161 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:03 crc kubenswrapper[5030]: I1128 12:14:03.201547 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Nov 28 12:14:03 crc kubenswrapper[5030]: I1128 12:14:03.201615 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.556322 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jwx77"] Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.565765 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jwx77"] Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.681940 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-jmdc6"] Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.683123 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.686225 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.686660 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.709736 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jmdc6"] Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.835366 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-db-sync-config-data\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.835540 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-combined-ca-bundle\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.835600 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59dh\" (UniqueName: \"kubernetes.io/projected/7e53366a-0b29-4bd7-b4c5-907c24084549-kube-api-access-k59dh\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.835647 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-config-data\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.937802 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-db-sync-config-data\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.937911 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-combined-ca-bundle\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.937971 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59dh\" (UniqueName: \"kubernetes.io/projected/7e53366a-0b29-4bd7-b4c5-907c24084549-kube-api-access-k59dh\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.938019 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-config-data\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.949518 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-db-sync-config-data\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.953245 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-combined-ca-bundle\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.953717 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-config-data\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:17 crc kubenswrapper[5030]: I1128 12:14:17.977684 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59dh\" (UniqueName: \"kubernetes.io/projected/7e53366a-0b29-4bd7-b4c5-907c24084549-kube-api-access-k59dh\") pod \"glance-db-sync-jmdc6\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:18 crc kubenswrapper[5030]: I1128 12:14:18.007157 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:18 crc kubenswrapper[5030]: I1128 12:14:18.281788 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jmdc6"] Nov 28 12:14:18 crc kubenswrapper[5030]: I1128 12:14:18.403747 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87ac6bc-7088-417e-a99b-1a28cc27dacb" path="/var/lib/kubelet/pods/d87ac6bc-7088-417e-a99b-1a28cc27dacb/volumes" Nov 28 12:14:18 crc kubenswrapper[5030]: I1128 12:14:18.938847 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jmdc6" event={"ID":"7e53366a-0b29-4bd7-b4c5-907c24084549","Type":"ContainerStarted","Data":"493149401bbd1a6501e7e20268a99fffd594bc9c29858532cebe95d26a471967"} Nov 28 12:14:18 crc kubenswrapper[5030]: I1128 12:14:18.940878 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jmdc6" event={"ID":"7e53366a-0b29-4bd7-b4c5-907c24084549","Type":"ContainerStarted","Data":"47d622642ed00fafcbb88c2bbaa46ae29147bfb5de58d0f31baf8dd35f8d850c"} Nov 28 12:14:18 crc kubenswrapper[5030]: I1128 12:14:18.963940 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-jmdc6" podStartSLOduration=1.963918695 podStartE2EDuration="1.963918695s" podCreationTimestamp="2025-11-28 12:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:14:18.959817814 +0000 UTC m=+1276.901560497" watchObservedRunningTime="2025-11-28 12:14:18.963918695 +0000 UTC m=+1276.905661378" Nov 28 12:14:22 crc kubenswrapper[5030]: I1128 12:14:22.979806 5030 generic.go:334] "Generic (PLEG): container finished" podID="7e53366a-0b29-4bd7-b4c5-907c24084549" containerID="493149401bbd1a6501e7e20268a99fffd594bc9c29858532cebe95d26a471967" exitCode=0 Nov 28 12:14:22 crc kubenswrapper[5030]: I1128 
12:14:22.979965 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jmdc6" event={"ID":"7e53366a-0b29-4bd7-b4c5-907c24084549","Type":"ContainerDied","Data":"493149401bbd1a6501e7e20268a99fffd594bc9c29858532cebe95d26a471967"} Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.439103 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.575749 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-db-sync-config-data\") pod \"7e53366a-0b29-4bd7-b4c5-907c24084549\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.575956 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-config-data\") pod \"7e53366a-0b29-4bd7-b4c5-907c24084549\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.576077 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-combined-ca-bundle\") pod \"7e53366a-0b29-4bd7-b4c5-907c24084549\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.576147 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59dh\" (UniqueName: \"kubernetes.io/projected/7e53366a-0b29-4bd7-b4c5-907c24084549-kube-api-access-k59dh\") pod \"7e53366a-0b29-4bd7-b4c5-907c24084549\" (UID: \"7e53366a-0b29-4bd7-b4c5-907c24084549\") " Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.585952 5030 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7e53366a-0b29-4bd7-b4c5-907c24084549" (UID: "7e53366a-0b29-4bd7-b4c5-907c24084549"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.586152 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e53366a-0b29-4bd7-b4c5-907c24084549-kube-api-access-k59dh" (OuterVolumeSpecName: "kube-api-access-k59dh") pod "7e53366a-0b29-4bd7-b4c5-907c24084549" (UID: "7e53366a-0b29-4bd7-b4c5-907c24084549"). InnerVolumeSpecName "kube-api-access-k59dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.617154 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e53366a-0b29-4bd7-b4c5-907c24084549" (UID: "7e53366a-0b29-4bd7-b4c5-907c24084549"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.656851 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-config-data" (OuterVolumeSpecName: "config-data") pod "7e53366a-0b29-4bd7-b4c5-907c24084549" (UID: "7e53366a-0b29-4bd7-b4c5-907c24084549"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.679361 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k59dh\" (UniqueName: \"kubernetes.io/projected/7e53366a-0b29-4bd7-b4c5-907c24084549-kube-api-access-k59dh\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.679414 5030 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.679435 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:24 crc kubenswrapper[5030]: I1128 12:14:24.679455 5030 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e53366a-0b29-4bd7-b4c5-907c24084549-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.007447 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jmdc6" event={"ID":"7e53366a-0b29-4bd7-b4c5-907c24084549","Type":"ContainerDied","Data":"47d622642ed00fafcbb88c2bbaa46ae29147bfb5de58d0f31baf8dd35f8d850c"} Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.007546 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d622642ed00fafcbb88c2bbaa46ae29147bfb5de58d0f31baf8dd35f8d850c" Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.007619 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jmdc6" Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.294876 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.295850 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-log" containerID="cri-o://12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a" gracePeriod=30 Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.296120 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-httpd" containerID="cri-o://c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0" gracePeriod=30 Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.315408 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.315785 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-log" containerID="cri-o://9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71" gracePeriod=30 Nov 28 12:14:25 crc kubenswrapper[5030]: I1128 12:14:25.316320 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-httpd" containerID="cri-o://2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e" gracePeriod=30 Nov 28 12:14:26 crc kubenswrapper[5030]: I1128 12:14:26.018915 5030 generic.go:334] "Generic (PLEG): container finished" 
podID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerID="9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71" exitCode=143 Nov 28 12:14:26 crc kubenswrapper[5030]: I1128 12:14:26.019051 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"33857e9b-1a42-4d31-b345-3ff7e9ea4853","Type":"ContainerDied","Data":"9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71"} Nov 28 12:14:26 crc kubenswrapper[5030]: I1128 12:14:26.021919 5030 generic.go:334] "Generic (PLEG): container finished" podID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerID="12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a" exitCode=143 Nov 28 12:14:26 crc kubenswrapper[5030]: I1128 12:14:26.021979 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a","Type":"ContainerDied","Data":"12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a"} Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.898731 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.905915 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.987854 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-iscsi\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988214 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988338 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-logs\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988459 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-iscsi\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988593 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-httpd-run\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988706 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988046 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988595 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.988810 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-dev\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989030 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989043 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-dev" (OuterVolumeSpecName: "dev") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989055 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-logs" (OuterVolumeSpecName: "logs") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989038 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-logs\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989218 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5nzw\" (UniqueName: \"kubernetes.io/projected/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-kube-api-access-q5nzw\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989274 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl9pt\" (UniqueName: \"kubernetes.io/projected/33857e9b-1a42-4d31-b345-3ff7e9ea4853-kube-api-access-gl9pt\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 
12:14:28.989335 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-config-data\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989381 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-httpd-run\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989419 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989458 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-lib-modules\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989516 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989547 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-var-locks-brick\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 
12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989603 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-sys\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989632 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-sys\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989669 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-dev\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989711 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-run\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989748 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-nvme\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989789 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-config-data\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: 
\"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989829 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-nvme\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989865 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-lib-modules\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989900 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-scripts\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989930 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-scripts\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989961 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-sys" (OuterVolumeSpecName: "sys") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.989975 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-var-locks-brick\") pod \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\" (UID: \"33857e9b-1a42-4d31-b345-3ff7e9ea4853\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990037 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-run\") pod \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\" (UID: \"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a\") " Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990710 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990742 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990761 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990779 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990795 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-httpd-run\") on node \"crc\" 
DevicePath \"\"" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990813 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990868 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-run" (OuterVolumeSpecName: "run") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990911 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-sys" (OuterVolumeSpecName: "sys") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990950 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-dev" (OuterVolumeSpecName: "dev") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.990988 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-run" (OuterVolumeSpecName: "run") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.991030 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.991369 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-logs" (OuterVolumeSpecName: "logs") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.992610 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.992661 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.992710 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.993718 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.993899 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.995421 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33857e9b-1a42-4d31-b345-3ff7e9ea4853-kube-api-access-gl9pt" (OuterVolumeSpecName: "kube-api-access-gl9pt") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "kube-api-access-gl9pt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.995451 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.995660 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.996377 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-scripts" (OuterVolumeSpecName: "scripts") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.996520 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.996520 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-kube-api-access-q5nzw" (OuterVolumeSpecName: "kube-api-access-q5nzw") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "kube-api-access-q5nzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:28 crc kubenswrapper[5030]: I1128 12:14:28.996809 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:28.997414 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-scripts" (OuterVolumeSpecName: "scripts") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:28.998122 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.031412 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-config-data" (OuterVolumeSpecName: "config-data") pod "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" (UID: "d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.050460 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-config-data" (OuterVolumeSpecName: "config-data") pod "33857e9b-1a42-4d31-b345-3ff7e9ea4853" (UID: "33857e9b-1a42-4d31-b345-3ff7e9ea4853"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.066111 5030 generic.go:334] "Generic (PLEG): container finished" podID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerID="2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e" exitCode=0 Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.066201 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.066198 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"33857e9b-1a42-4d31-b345-3ff7e9ea4853","Type":"ContainerDied","Data":"2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e"} Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.066294 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"33857e9b-1a42-4d31-b345-3ff7e9ea4853","Type":"ContainerDied","Data":"28408b5a69134fbf4675e71d2d3948ae14a43de972f080a4a2e8530c9e28a80f"} Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.066336 5030 scope.go:117] "RemoveContainer" containerID="2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.069462 5030 generic.go:334] "Generic (PLEG): container finished" podID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerID="c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0" exitCode=0 Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.069515 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a","Type":"ContainerDied","Data":"c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0"} Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.069548 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a","Type":"ContainerDied","Data":"e7457502763f3f63d223c6cb952deb48f63ddadda5a3b2fbd5e9de42a7883d15"} Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.069562 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.096379 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.096907 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.096926 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33857e9b-1a42-4d31-b345-3ff7e9ea4853-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.096941 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5nzw\" (UniqueName: \"kubernetes.io/projected/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-kube-api-access-q5nzw\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.096955 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl9pt\" (UniqueName: \"kubernetes.io/projected/33857e9b-1a42-4d31-b345-3ff7e9ea4853-kube-api-access-gl9pt\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.096968 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.096983 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097003 5030 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097015 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097027 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097046 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097058 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097069 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097081 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097093 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc 
kubenswrapper[5030]: I1128 12:14:29.097105 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097119 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097133 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097147 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33857e9b-1a42-4d31-b345-3ff7e9ea4853-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097157 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097168 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33857e9b-1a42-4d31-b345-3ff7e9ea4853-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.097181 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.099813 5030 scope.go:117] "RemoveContainer" containerID="9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71" Nov 28 12:14:29 crc kubenswrapper[5030]: 
I1128 12:14:29.116480 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.119375 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.124931 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.127333 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.127960 5030 scope.go:117] "RemoveContainer" containerID="2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e" Nov 28 12:14:29 crc kubenswrapper[5030]: E1128 12:14:29.128337 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e\": container with ID starting with 2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e not found: ID does not exist" containerID="2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.128372 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e"} err="failed to get container status \"2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e\": rpc error: code = NotFound desc = could not find container \"2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e\": container with ID starting with 
2408f4c5010cd229a797e54928b5b8ec86e4d6c0e9c45c21e5f3a94b5c1fef4e not found: ID does not exist" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.128487 5030 scope.go:117] "RemoveContainer" containerID="9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71" Nov 28 12:14:29 crc kubenswrapper[5030]: E1128 12:14:29.128686 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71\": container with ID starting with 9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71 not found: ID does not exist" containerID="9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.128710 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71"} err="failed to get container status \"9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71\": rpc error: code = NotFound desc = could not find container \"9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71\": container with ID starting with 9a5dad09addae6006c32bb5c900accc4fe14eb7a8abac367e81d037155a51d71 not found: ID does not exist" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.128726 5030 scope.go:117] "RemoveContainer" containerID="c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.128941 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.132484 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.139513 5030 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.146338 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.148483 5030 scope.go:117] "RemoveContainer" containerID="12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.164865 5030 scope.go:117] "RemoveContainer" containerID="c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0" Nov 28 12:14:29 crc kubenswrapper[5030]: E1128 12:14:29.165235 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0\": container with ID starting with c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0 not found: ID does not exist" containerID="c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.165286 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0"} err="failed to get container status \"c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0\": rpc error: code = NotFound desc = could not find container \"c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0\": container with ID starting with c4b8b16ea80c2ed9f1499efa50fdcd8b6ea7b0ff63ec53c733209f0789ed7ed0 not found: ID does not exist" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.165321 5030 scope.go:117] "RemoveContainer" containerID="12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a" Nov 28 12:14:29 crc kubenswrapper[5030]: E1128 12:14:29.166773 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a\": container with ID starting with 12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a not found: ID does not exist" containerID="12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.166804 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a"} err="failed to get container status \"12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a\": rpc error: code = NotFound desc = could not find container \"12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a\": container with ID starting with 12e6294b1d8f755318350006f2a70097505cf0dd5f2e2bd884f6dc1bf06f7e3a not found: ID does not exist" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.199144 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.199179 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.199188 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:29 crc kubenswrapper[5030]: I1128 12:14:29.199198 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.407922 5030 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" path="/var/lib/kubelet/pods/33857e9b-1a42-4d31-b345-3ff7e9ea4853/volumes" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.409311 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" path="/var/lib/kubelet/pods/d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a/volumes" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.597853 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:30 crc kubenswrapper[5030]: E1128 12:14:30.598369 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-log" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598397 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-log" Nov 28 12:14:30 crc kubenswrapper[5030]: E1128 12:14:30.598422 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-httpd" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598434 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-httpd" Nov 28 12:14:30 crc kubenswrapper[5030]: E1128 12:14:30.598456 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-httpd" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598502 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-httpd" Nov 28 12:14:30 crc kubenswrapper[5030]: E1128 12:14:30.598530 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e53366a-0b29-4bd7-b4c5-907c24084549" containerName="glance-db-sync" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598546 5030 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7e53366a-0b29-4bd7-b4c5-907c24084549" containerName="glance-db-sync" Nov 28 12:14:30 crc kubenswrapper[5030]: E1128 12:14:30.598584 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-log" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598599 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-log" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598895 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-log" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598936 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-httpd" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598966 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62f446b-a0b0-4b8f-b9c4-8ca52ad9fa6a" containerName="glance-log" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598985 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="33857e9b-1a42-4d31-b345-3ff7e9ea4853" containerName="glance-httpd" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.598999 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e53366a-0b29-4bd7-b4c5-907c24084549" containerName="glance-db-sync" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.609729 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.609923 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.641535 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-internal-svc" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.642117 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-public-svc" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.642238 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.642382 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.642732 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.642935 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-ft8ps" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.743895 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.744300 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc 
kubenswrapper[5030]: I1128 12:14:30.744570 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.744697 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-httpd-run\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.744724 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-scripts\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.744779 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vv7g\" (UniqueName: \"kubernetes.io/projected/34fcfdf8-5579-400d-9160-e8b7a15c7057-kube-api-access-5vv7g\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.744872 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc 
kubenswrapper[5030]: I1128 12:14:30.744917 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-config-data\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.745007 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-logs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847014 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847097 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847160 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847239 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-httpd-run\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847256 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-scripts\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847306 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vv7g\" (UniqueName: \"kubernetes.io/projected/34fcfdf8-5579-400d-9160-e8b7a15c7057-kube-api-access-5vv7g\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847367 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847413 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-config-data\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.847533 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-logs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.848029 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-logs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.848702 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.848877 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-httpd-run\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.854766 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.857171 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-public-tls-certs\") pod 
\"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.858030 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.859342 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-config-data\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.859813 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-scripts\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.873181 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vv7g\" (UniqueName: \"kubernetes.io/projected/34fcfdf8-5579-400d-9160-e8b7a15c7057-kube-api-access-5vv7g\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.887736 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-0\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 
28 12:14:30 crc kubenswrapper[5030]: I1128 12:14:30.976440 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:31 crc kubenswrapper[5030]: I1128 12:14:31.230559 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:32 crc kubenswrapper[5030]: I1128 12:14:32.109030 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"34fcfdf8-5579-400d-9160-e8b7a15c7057","Type":"ContainerStarted","Data":"e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad"} Nov 28 12:14:32 crc kubenswrapper[5030]: I1128 12:14:32.109448 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"34fcfdf8-5579-400d-9160-e8b7a15c7057","Type":"ContainerStarted","Data":"0b5411d1d665e832c7ce9f65e83df083a3e6a73f6acd39cf6ad92d9772584ca2"} Nov 28 12:14:33 crc kubenswrapper[5030]: I1128 12:14:33.120530 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"34fcfdf8-5579-400d-9160-e8b7a15c7057","Type":"ContainerStarted","Data":"be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959"} Nov 28 12:14:33 crc kubenswrapper[5030]: I1128 12:14:33.166030 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=3.1660016300000002 podStartE2EDuration="3.16600163s" podCreationTimestamp="2025-11-28 12:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:14:33.157198282 +0000 UTC m=+1291.098940965" watchObservedRunningTime="2025-11-28 12:14:33.16600163 +0000 UTC m=+1291.107744313" Nov 28 12:14:33 crc kubenswrapper[5030]: I1128 12:14:33.201998 5030 patch_prober.go:28] interesting 
pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:14:33 crc kubenswrapper[5030]: I1128 12:14:33.202072 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:14:38 crc kubenswrapper[5030]: I1128 12:14:38.311490 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-tztrm" podUID="f3b6b1e4-08cb-4867-b88a-ee08ddcaa045" containerName="registry-server" probeResult="failure" output=< Nov 28 12:14:38 crc kubenswrapper[5030]: timeout: failed to connect service ":50051" within 1s Nov 28 12:14:38 crc kubenswrapper[5030]: > Nov 28 12:14:38 crc kubenswrapper[5030]: I1128 12:14:38.324148 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-tztrm" podUID="f3b6b1e4-08cb-4867-b88a-ee08ddcaa045" containerName="registry-server" probeResult="failure" output=< Nov 28 12:14:38 crc kubenswrapper[5030]: timeout: failed to connect service ":50051" within 1s Nov 28 12:14:38 crc kubenswrapper[5030]: > Nov 28 12:14:40 crc kubenswrapper[5030]: I1128 12:14:40.976841 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:40 crc kubenswrapper[5030]: I1128 12:14:40.977285 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:41 crc kubenswrapper[5030]: I1128 12:14:41.011620 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:41 crc kubenswrapper[5030]: I1128 12:14:41.038925 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:41 crc kubenswrapper[5030]: I1128 12:14:41.220888 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:41 crc kubenswrapper[5030]: I1128 12:14:41.220947 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:43 crc kubenswrapper[5030]: I1128 12:14:43.225209 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:43 crc kubenswrapper[5030]: I1128 12:14:43.230734 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.494704 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jmdc6"] Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.499920 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jmdc6"] Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.533724 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glanced9a5-account-delete-psbs8"] Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.535426 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.554948 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glanced9a5-account-delete-psbs8"] Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.621863 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5m6\" (UniqueName: \"kubernetes.io/projected/f68bd217-fa57-4bd8-98d4-43f67849b6f0-kube-api-access-xm5m6\") pod \"glanced9a5-account-delete-psbs8\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.622127 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f68bd217-fa57-4bd8-98d4-43f67849b6f0-operator-scripts\") pod \"glanced9a5-account-delete-psbs8\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.629284 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.724576 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f68bd217-fa57-4bd8-98d4-43f67849b6f0-operator-scripts\") pod \"glanced9a5-account-delete-psbs8\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.724721 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5m6\" (UniqueName: \"kubernetes.io/projected/f68bd217-fa57-4bd8-98d4-43f67849b6f0-kube-api-access-xm5m6\") pod 
\"glanced9a5-account-delete-psbs8\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.725410 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f68bd217-fa57-4bd8-98d4-43f67849b6f0-operator-scripts\") pod \"glanced9a5-account-delete-psbs8\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.747356 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5m6\" (UniqueName: \"kubernetes.io/projected/f68bd217-fa57-4bd8-98d4-43f67849b6f0-kube-api-access-xm5m6\") pod \"glanced9a5-account-delete-psbs8\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:44 crc kubenswrapper[5030]: I1128 12:14:44.869556 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:45 crc kubenswrapper[5030]: I1128 12:14:45.133842 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glanced9a5-account-delete-psbs8"] Nov 28 12:14:45 crc kubenswrapper[5030]: I1128 12:14:45.255379 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-log" containerID="cri-o://e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad" gracePeriod=30 Nov 28 12:14:45 crc kubenswrapper[5030]: I1128 12:14:45.255744 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" event={"ID":"f68bd217-fa57-4bd8-98d4-43f67849b6f0","Type":"ContainerStarted","Data":"4fa239688b66fdeb0f6c760ae26281b73b2536ea29f9effdf674beb2b3b4e30d"} Nov 28 12:14:45 crc kubenswrapper[5030]: I1128 12:14:45.255805 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-httpd" containerID="cri-o://be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959" gracePeriod=30 Nov 28 12:14:45 crc kubenswrapper[5030]: I1128 12:14:45.262124 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.105:9292/healthcheck\": EOF" Nov 28 12:14:46 crc kubenswrapper[5030]: I1128 12:14:46.265506 5030 generic.go:334] "Generic (PLEG): container finished" podID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerID="e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad" exitCode=143 Nov 28 12:14:46 crc kubenswrapper[5030]: I1128 12:14:46.265601 5030 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"34fcfdf8-5579-400d-9160-e8b7a15c7057","Type":"ContainerDied","Data":"e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad"} Nov 28 12:14:46 crc kubenswrapper[5030]: I1128 12:14:46.267072 5030 generic.go:334] "Generic (PLEG): container finished" podID="f68bd217-fa57-4bd8-98d4-43f67849b6f0" containerID="598c65f16c59bc89f2f9b1657ed8c7dfb935d27b0334227463f1772236905ccf" exitCode=0 Nov 28 12:14:46 crc kubenswrapper[5030]: I1128 12:14:46.267099 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" event={"ID":"f68bd217-fa57-4bd8-98d4-43f67849b6f0","Type":"ContainerDied","Data":"598c65f16c59bc89f2f9b1657ed8c7dfb935d27b0334227463f1772236905ccf"} Nov 28 12:14:46 crc kubenswrapper[5030]: I1128 12:14:46.402264 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e53366a-0b29-4bd7-b4c5-907c24084549" path="/var/lib/kubelet/pods/7e53366a-0b29-4bd7-b4c5-907c24084549/volumes" Nov 28 12:14:47 crc kubenswrapper[5030]: I1128 12:14:47.582728 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:47 crc kubenswrapper[5030]: I1128 12:14:47.677258 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f68bd217-fa57-4bd8-98d4-43f67849b6f0-operator-scripts\") pod \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " Nov 28 12:14:47 crc kubenswrapper[5030]: I1128 12:14:47.677314 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm5m6\" (UniqueName: \"kubernetes.io/projected/f68bd217-fa57-4bd8-98d4-43f67849b6f0-kube-api-access-xm5m6\") pod \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\" (UID: \"f68bd217-fa57-4bd8-98d4-43f67849b6f0\") " Nov 28 12:14:47 crc kubenswrapper[5030]: I1128 12:14:47.678571 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f68bd217-fa57-4bd8-98d4-43f67849b6f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f68bd217-fa57-4bd8-98d4-43f67849b6f0" (UID: "f68bd217-fa57-4bd8-98d4-43f67849b6f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:14:47 crc kubenswrapper[5030]: I1128 12:14:47.685229 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f68bd217-fa57-4bd8-98d4-43f67849b6f0-kube-api-access-xm5m6" (OuterVolumeSpecName: "kube-api-access-xm5m6") pod "f68bd217-fa57-4bd8-98d4-43f67849b6f0" (UID: "f68bd217-fa57-4bd8-98d4-43f67849b6f0"). InnerVolumeSpecName "kube-api-access-xm5m6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:47 crc kubenswrapper[5030]: I1128 12:14:47.779663 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f68bd217-fa57-4bd8-98d4-43f67849b6f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:47 crc kubenswrapper[5030]: I1128 12:14:47.779726 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm5m6\" (UniqueName: \"kubernetes.io/projected/f68bd217-fa57-4bd8-98d4-43f67849b6f0-kube-api-access-xm5m6\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:48 crc kubenswrapper[5030]: I1128 12:14:48.296350 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" event={"ID":"f68bd217-fa57-4bd8-98d4-43f67849b6f0","Type":"ContainerDied","Data":"4fa239688b66fdeb0f6c760ae26281b73b2536ea29f9effdf674beb2b3b4e30d"} Nov 28 12:14:48 crc kubenswrapper[5030]: I1128 12:14:48.296872 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa239688b66fdeb0f6c760ae26281b73b2536ea29f9effdf674beb2b3b4e30d" Nov 28 12:14:48 crc kubenswrapper[5030]: I1128 12:14:48.296416 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glanced9a5-account-delete-psbs8" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.165401 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.304510 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vv7g\" (UniqueName: \"kubernetes.io/projected/34fcfdf8-5579-400d-9160-e8b7a15c7057-kube-api-access-5vv7g\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.304714 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-logs\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.304835 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-config-data\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.304937 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-internal-tls-certs\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.304970 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-public-tls-certs\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.305001 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-scripts\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.305026 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.305071 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-combined-ca-bundle\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.305121 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-httpd-run\") pod \"34fcfdf8-5579-400d-9160-e8b7a15c7057\" (UID: \"34fcfdf8-5579-400d-9160-e8b7a15c7057\") " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.305960 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.306337 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-logs" (OuterVolumeSpecName: "logs") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.309043 5030 generic.go:334] "Generic (PLEG): container finished" podID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerID="be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959" exitCode=0 Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.309097 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"34fcfdf8-5579-400d-9160-e8b7a15c7057","Type":"ContainerDied","Data":"be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959"} Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.309133 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"34fcfdf8-5579-400d-9160-e8b7a15c7057","Type":"ContainerDied","Data":"0b5411d1d665e832c7ce9f65e83df083a3e6a73f6acd39cf6ad92d9772584ca2"} Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.309154 5030 scope.go:117] "RemoveContainer" containerID="be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.309151 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.312121 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fcfdf8-5579-400d-9160-e8b7a15c7057-kube-api-access-5vv7g" (OuterVolumeSpecName: "kube-api-access-5vv7g") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "kube-api-access-5vv7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.312281 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-scripts" (OuterVolumeSpecName: "scripts") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.313927 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.343025 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.349361 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.351781 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.362141 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-config-data" (OuterVolumeSpecName: "config-data") pod "34fcfdf8-5579-400d-9160-e8b7a15c7057" (UID: "34fcfdf8-5579-400d-9160-e8b7a15c7057"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.381522 5030 scope.go:117] "RemoveContainer" containerID="e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.397128 5030 scope.go:117] "RemoveContainer" containerID="be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959" Nov 28 12:14:49 crc kubenswrapper[5030]: E1128 12:14:49.397553 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959\": container with ID starting with be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959 not found: ID does not exist" containerID="be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.397601 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959"} err="failed to get 
container status \"be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959\": rpc error: code = NotFound desc = could not find container \"be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959\": container with ID starting with be9486dd27e2080053d42e3030feb85405a6d85fcf1af37aba894e801c9de959 not found: ID does not exist" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.397629 5030 scope.go:117] "RemoveContainer" containerID="e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad" Nov 28 12:14:49 crc kubenswrapper[5030]: E1128 12:14:49.397944 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad\": container with ID starting with e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad not found: ID does not exist" containerID="e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.397988 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad"} err="failed to get container status \"e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad\": rpc error: code = NotFound desc = could not find container \"e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad\": container with ID starting with e0a02d8375e4602bb7dfe9dbbc18b23d8871ad881e99e45a624acb90f89cdcad not found: ID does not exist" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407136 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407164 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407179 5030 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407192 5030 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407204 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407244 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407257 5030 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fcfdf8-5579-400d-9160-e8b7a15c7057-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407269 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34fcfdf8-5579-400d-9160-e8b7a15c7057-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.407282 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vv7g\" (UniqueName: \"kubernetes.io/projected/34fcfdf8-5579-400d-9160-e8b7a15c7057-kube-api-access-5vv7g\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc 
kubenswrapper[5030]: I1128 12:14:49.425303 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.509108 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.555095 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-vwlpj"] Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.569781 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-vwlpj"] Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.574817 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm"] Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.579578 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glanced9a5-account-delete-psbs8"] Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.584219 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-d9a5-account-create-update-7vnbm"] Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.588826 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glanced9a5-account-delete-psbs8"] Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.643131 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:49 crc kubenswrapper[5030]: I1128 12:14:49.648771 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:14:50 crc kubenswrapper[5030]: I1128 12:14:50.410892 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" path="/var/lib/kubelet/pods/34fcfdf8-5579-400d-9160-e8b7a15c7057/volumes" Nov 28 12:14:50 crc kubenswrapper[5030]: I1128 12:14:50.413114 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544e8049-7f6a-400f-a4cc-2fd82beaed9d" path="/var/lib/kubelet/pods/544e8049-7f6a-400f-a4cc-2fd82beaed9d/volumes" Nov 28 12:14:50 crc kubenswrapper[5030]: I1128 12:14:50.414442 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ae2100-f0e3-45a8-8800-b78ccb909903" path="/var/lib/kubelet/pods/c4ae2100-f0e3-45a8-8800-b78ccb909903/volumes" Nov 28 12:14:50 crc kubenswrapper[5030]: I1128 12:14:50.417399 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f68bd217-fa57-4bd8-98d4-43f67849b6f0" path="/var/lib/kubelet/pods/f68bd217-fa57-4bd8-98d4-43f67849b6f0/volumes" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.505508 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-mrjj6"] Nov 28 12:14:51 crc kubenswrapper[5030]: E1128 12:14:51.505994 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-log" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.506012 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-log" Nov 28 12:14:51 crc kubenswrapper[5030]: E1128 12:14:51.506027 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f68bd217-fa57-4bd8-98d4-43f67849b6f0" containerName="mariadb-account-delete" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.506036 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="f68bd217-fa57-4bd8-98d4-43f67849b6f0" containerName="mariadb-account-delete" Nov 28 12:14:51 crc kubenswrapper[5030]: E1128 12:14:51.506051 5030 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-httpd" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.506058 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-httpd" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.506213 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-log" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.506221 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fcfdf8-5579-400d-9160-e8b7a15c7057" containerName="glance-httpd" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.506232 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="f68bd217-fa57-4bd8-98d4-43f67849b6f0" containerName="mariadb-account-delete" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.506948 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.517542 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-d958-account-create-update-f7jjt"] Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.518760 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.521989 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.531829 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-mrjj6"] Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.538675 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-d958-account-create-update-f7jjt"] Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.541602 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181c230a-8790-4356-9629-926496d09c14-operator-scripts\") pod \"glance-db-create-mrjj6\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.541892 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tslv2\" (UniqueName: \"kubernetes.io/projected/181c230a-8790-4356-9629-926496d09c14-kube-api-access-tslv2\") pod \"glance-db-create-mrjj6\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.541994 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491c27aa-ae62-413f-805d-7a9c200f53eb-operator-scripts\") pod \"glance-d958-account-create-update-f7jjt\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.542078 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zc9\" (UniqueName: \"kubernetes.io/projected/491c27aa-ae62-413f-805d-7a9c200f53eb-kube-api-access-98zc9\") pod \"glance-d958-account-create-update-f7jjt\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.643410 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tslv2\" (UniqueName: \"kubernetes.io/projected/181c230a-8790-4356-9629-926496d09c14-kube-api-access-tslv2\") pod \"glance-db-create-mrjj6\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.643790 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491c27aa-ae62-413f-805d-7a9c200f53eb-operator-scripts\") pod \"glance-d958-account-create-update-f7jjt\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.644263 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98zc9\" (UniqueName: \"kubernetes.io/projected/491c27aa-ae62-413f-805d-7a9c200f53eb-kube-api-access-98zc9\") pod \"glance-d958-account-create-update-f7jjt\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.644525 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181c230a-8790-4356-9629-926496d09c14-operator-scripts\") pod \"glance-db-create-mrjj6\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " 
pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.644748 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491c27aa-ae62-413f-805d-7a9c200f53eb-operator-scripts\") pod \"glance-d958-account-create-update-f7jjt\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.645846 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181c230a-8790-4356-9629-926496d09c14-operator-scripts\") pod \"glance-db-create-mrjj6\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.662852 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98zc9\" (UniqueName: \"kubernetes.io/projected/491c27aa-ae62-413f-805d-7a9c200f53eb-kube-api-access-98zc9\") pod \"glance-d958-account-create-update-f7jjt\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.663183 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tslv2\" (UniqueName: \"kubernetes.io/projected/181c230a-8790-4356-9629-926496d09c14-kube-api-access-tslv2\") pod \"glance-db-create-mrjj6\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.858159 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:51 crc kubenswrapper[5030]: I1128 12:14:51.866694 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:52 crc kubenswrapper[5030]: I1128 12:14:52.153513 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-d958-account-create-update-f7jjt"] Nov 28 12:14:52 crc kubenswrapper[5030]: I1128 12:14:52.200424 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-mrjj6"] Nov 28 12:14:52 crc kubenswrapper[5030]: W1128 12:14:52.207482 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod181c230a_8790_4356_9629_926496d09c14.slice/crio-cd883fe1cf1e6941035fa14efa134d34abfb42a0d122681fcb47813782c0bfef WatchSource:0}: Error finding container cd883fe1cf1e6941035fa14efa134d34abfb42a0d122681fcb47813782c0bfef: Status 404 returned error can't find the container with id cd883fe1cf1e6941035fa14efa134d34abfb42a0d122681fcb47813782c0bfef Nov 28 12:14:52 crc kubenswrapper[5030]: I1128 12:14:52.344242 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mrjj6" event={"ID":"181c230a-8790-4356-9629-926496d09c14","Type":"ContainerStarted","Data":"cd883fe1cf1e6941035fa14efa134d34abfb42a0d122681fcb47813782c0bfef"} Nov 28 12:14:52 crc kubenswrapper[5030]: I1128 12:14:52.347273 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" event={"ID":"491c27aa-ae62-413f-805d-7a9c200f53eb","Type":"ContainerStarted","Data":"25727fd35684a6c48ec861f513662d177d4ac71d1893cfacef858d0f2bc33cbc"} Nov 28 12:14:53 crc kubenswrapper[5030]: I1128 12:14:53.358888 5030 generic.go:334] "Generic (PLEG): container finished" podID="491c27aa-ae62-413f-805d-7a9c200f53eb" containerID="4ff6f5fa051a0da9b3e269845c483428d30976b43f619d6e1870f99d46493c67" exitCode=0 Nov 28 12:14:53 crc kubenswrapper[5030]: I1128 12:14:53.359098 5030 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" event={"ID":"491c27aa-ae62-413f-805d-7a9c200f53eb","Type":"ContainerDied","Data":"4ff6f5fa051a0da9b3e269845c483428d30976b43f619d6e1870f99d46493c67"} Nov 28 12:14:53 crc kubenswrapper[5030]: I1128 12:14:53.363628 5030 generic.go:334] "Generic (PLEG): container finished" podID="181c230a-8790-4356-9629-926496d09c14" containerID="b6912f3ec6269a8e89dda6c11fd5325ef3e7e60619ee0487d072caa53376985c" exitCode=0 Nov 28 12:14:53 crc kubenswrapper[5030]: I1128 12:14:53.363693 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mrjj6" event={"ID":"181c230a-8790-4356-9629-926496d09c14","Type":"ContainerDied","Data":"b6912f3ec6269a8e89dda6c11fd5325ef3e7e60619ee0487d072caa53376985c"} Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.740342 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.747949 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.815852 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181c230a-8790-4356-9629-926496d09c14-operator-scripts\") pod \"181c230a-8790-4356-9629-926496d09c14\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.815895 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491c27aa-ae62-413f-805d-7a9c200f53eb-operator-scripts\") pod \"491c27aa-ae62-413f-805d-7a9c200f53eb\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.815968 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tslv2\" (UniqueName: \"kubernetes.io/projected/181c230a-8790-4356-9629-926496d09c14-kube-api-access-tslv2\") pod \"181c230a-8790-4356-9629-926496d09c14\" (UID: \"181c230a-8790-4356-9629-926496d09c14\") " Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.815995 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98zc9\" (UniqueName: \"kubernetes.io/projected/491c27aa-ae62-413f-805d-7a9c200f53eb-kube-api-access-98zc9\") pod \"491c27aa-ae62-413f-805d-7a9c200f53eb\" (UID: \"491c27aa-ae62-413f-805d-7a9c200f53eb\") " Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.816559 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/181c230a-8790-4356-9629-926496d09c14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "181c230a-8790-4356-9629-926496d09c14" (UID: "181c230a-8790-4356-9629-926496d09c14"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.816570 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/491c27aa-ae62-413f-805d-7a9c200f53eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "491c27aa-ae62-413f-805d-7a9c200f53eb" (UID: "491c27aa-ae62-413f-805d-7a9c200f53eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.817766 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/181c230a-8790-4356-9629-926496d09c14-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.817823 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491c27aa-ae62-413f-805d-7a9c200f53eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.822731 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491c27aa-ae62-413f-805d-7a9c200f53eb-kube-api-access-98zc9" (OuterVolumeSpecName: "kube-api-access-98zc9") pod "491c27aa-ae62-413f-805d-7a9c200f53eb" (UID: "491c27aa-ae62-413f-805d-7a9c200f53eb"). InnerVolumeSpecName "kube-api-access-98zc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.823269 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181c230a-8790-4356-9629-926496d09c14-kube-api-access-tslv2" (OuterVolumeSpecName: "kube-api-access-tslv2") pod "181c230a-8790-4356-9629-926496d09c14" (UID: "181c230a-8790-4356-9629-926496d09c14"). InnerVolumeSpecName "kube-api-access-tslv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.919975 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tslv2\" (UniqueName: \"kubernetes.io/projected/181c230a-8790-4356-9629-926496d09c14-kube-api-access-tslv2\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:54 crc kubenswrapper[5030]: I1128 12:14:54.920421 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98zc9\" (UniqueName: \"kubernetes.io/projected/491c27aa-ae62-413f-805d-7a9c200f53eb-kube-api-access-98zc9\") on node \"crc\" DevicePath \"\"" Nov 28 12:14:55 crc kubenswrapper[5030]: I1128 12:14:55.386182 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mrjj6" event={"ID":"181c230a-8790-4356-9629-926496d09c14","Type":"ContainerDied","Data":"cd883fe1cf1e6941035fa14efa134d34abfb42a0d122681fcb47813782c0bfef"} Nov 28 12:14:55 crc kubenswrapper[5030]: I1128 12:14:55.386707 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd883fe1cf1e6941035fa14efa134d34abfb42a0d122681fcb47813782c0bfef" Nov 28 12:14:55 crc kubenswrapper[5030]: I1128 12:14:55.386235 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mrjj6" Nov 28 12:14:55 crc kubenswrapper[5030]: I1128 12:14:55.390177 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" event={"ID":"491c27aa-ae62-413f-805d-7a9c200f53eb","Type":"ContainerDied","Data":"25727fd35684a6c48ec861f513662d177d4ac71d1893cfacef858d0f2bc33cbc"} Nov 28 12:14:55 crc kubenswrapper[5030]: I1128 12:14:55.390276 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25727fd35684a6c48ec861f513662d177d4ac71d1893cfacef858d0f2bc33cbc" Nov 28 12:14:55 crc kubenswrapper[5030]: I1128 12:14:55.390394 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-d958-account-create-update-f7jjt" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.741079 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-5s47r"] Nov 28 12:14:56 crc kubenswrapper[5030]: E1128 12:14:56.741452 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181c230a-8790-4356-9629-926496d09c14" containerName="mariadb-database-create" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.741485 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="181c230a-8790-4356-9629-926496d09c14" containerName="mariadb-database-create" Nov 28 12:14:56 crc kubenswrapper[5030]: E1128 12:14:56.741521 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491c27aa-ae62-413f-805d-7a9c200f53eb" containerName="mariadb-account-create-update" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.741529 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="491c27aa-ae62-413f-805d-7a9c200f53eb" containerName="mariadb-account-create-update" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.741723 5030 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="491c27aa-ae62-413f-805d-7a9c200f53eb" containerName="mariadb-account-create-update" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.741762 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="181c230a-8790-4356-9629-926496d09c14" containerName="mariadb-database-create" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.742489 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.745053 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.745457 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-74pbj" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.756214 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5s47r"] Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.854843 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6tv\" (UniqueName: \"kubernetes.io/projected/44250726-643f-4606-ab0d-d7a89342fea0-kube-api-access-5c6tv\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.855001 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-config-data\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.855072 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-db-sync-config-data\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.956954 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-db-sync-config-data\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.957066 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c6tv\" (UniqueName: \"kubernetes.io/projected/44250726-643f-4606-ab0d-d7a89342fea0-kube-api-access-5c6tv\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.957134 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-config-data\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.964866 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-config-data\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.974965 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-db-sync-config-data\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:56 crc kubenswrapper[5030]: I1128 12:14:56.975750 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c6tv\" (UniqueName: \"kubernetes.io/projected/44250726-643f-4606-ab0d-d7a89342fea0-kube-api-access-5c6tv\") pod \"glance-db-sync-5s47r\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:57 crc kubenswrapper[5030]: I1128 12:14:57.122860 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:14:57 crc kubenswrapper[5030]: I1128 12:14:57.349825 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5s47r"] Nov 28 12:14:57 crc kubenswrapper[5030]: I1128 12:14:57.410819 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5s47r" event={"ID":"44250726-643f-4606-ab0d-d7a89342fea0","Type":"ContainerStarted","Data":"4dce725b757d6771f7d0beec70557f5d083eff2535cb4dfc40ecd9c3903d8015"} Nov 28 12:14:58 crc kubenswrapper[5030]: I1128 12:14:58.420709 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5s47r" event={"ID":"44250726-643f-4606-ab0d-d7a89342fea0","Type":"ContainerStarted","Data":"30ccf97afc548aaa8f7923a9c115f9ec3e13b9c2ee241a9160d46f7a3a95867e"} Nov 28 12:14:58 crc kubenswrapper[5030]: I1128 12:14:58.442661 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-5s47r" podStartSLOduration=2.442633966 podStartE2EDuration="2.442633966s" podCreationTimestamp="2025-11-28 12:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 12:14:58.436072528 +0000 UTC m=+1316.377815211" watchObservedRunningTime="2025-11-28 12:14:58.442633966 +0000 UTC m=+1316.384376649" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.132152 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc"] Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.134649 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.139554 5030 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.140040 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.141728 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc"] Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.214344 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62481540-944b-4e05-a553-6325f0cf0ef4-config-volume\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.214578 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2tj\" (UniqueName: \"kubernetes.io/projected/62481540-944b-4e05-a553-6325f0cf0ef4-kube-api-access-ml2tj\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.214646 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62481540-944b-4e05-a553-6325f0cf0ef4-secret-volume\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.316652 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62481540-944b-4e05-a553-6325f0cf0ef4-config-volume\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.316730 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml2tj\" (UniqueName: \"kubernetes.io/projected/62481540-944b-4e05-a553-6325f0cf0ef4-kube-api-access-ml2tj\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.316772 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62481540-944b-4e05-a553-6325f0cf0ef4-secret-volume\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.318162 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/62481540-944b-4e05-a553-6325f0cf0ef4-config-volume\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.322992 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62481540-944b-4e05-a553-6325f0cf0ef4-secret-volume\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.332513 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml2tj\" (UniqueName: \"kubernetes.io/projected/62481540-944b-4e05-a553-6325f0cf0ef4-kube-api-access-ml2tj\") pod \"collect-profiles-29405535-qnhsc\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.454055 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:00 crc kubenswrapper[5030]: I1128 12:15:00.927551 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc"] Nov 28 12:15:00 crc kubenswrapper[5030]: W1128 12:15:00.978832 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62481540_944b_4e05_a553_6325f0cf0ef4.slice/crio-bd4b7a9a86f7dbea9b0757aef09983d8f86e1c46d00f75b4d05a5dbddcefc17a WatchSource:0}: Error finding container bd4b7a9a86f7dbea9b0757aef09983d8f86e1c46d00f75b4d05a5dbddcefc17a: Status 404 returned error can't find the container with id bd4b7a9a86f7dbea9b0757aef09983d8f86e1c46d00f75b4d05a5dbddcefc17a Nov 28 12:15:01 crc kubenswrapper[5030]: I1128 12:15:01.446426 5030 generic.go:334] "Generic (PLEG): container finished" podID="44250726-643f-4606-ab0d-d7a89342fea0" containerID="30ccf97afc548aaa8f7923a9c115f9ec3e13b9c2ee241a9160d46f7a3a95867e" exitCode=0 Nov 28 12:15:01 crc kubenswrapper[5030]: I1128 12:15:01.446570 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5s47r" event={"ID":"44250726-643f-4606-ab0d-d7a89342fea0","Type":"ContainerDied","Data":"30ccf97afc548aaa8f7923a9c115f9ec3e13b9c2ee241a9160d46f7a3a95867e"} Nov 28 12:15:01 crc kubenswrapper[5030]: I1128 12:15:01.449311 5030 generic.go:334] "Generic (PLEG): container finished" podID="62481540-944b-4e05-a553-6325f0cf0ef4" containerID="6a3153101ce29a1f6b458bd854d5c30e2fc7e9efb1aad59cd666b4891720124e" exitCode=0 Nov 28 12:15:01 crc kubenswrapper[5030]: I1128 12:15:01.449351 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" event={"ID":"62481540-944b-4e05-a553-6325f0cf0ef4","Type":"ContainerDied","Data":"6a3153101ce29a1f6b458bd854d5c30e2fc7e9efb1aad59cd666b4891720124e"} 
Nov 28 12:15:01 crc kubenswrapper[5030]: I1128 12:15:01.449378 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" event={"ID":"62481540-944b-4e05-a553-6325f0cf0ef4","Type":"ContainerStarted","Data":"bd4b7a9a86f7dbea9b0757aef09983d8f86e1c46d00f75b4d05a5dbddcefc17a"} Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.810559 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.815660 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.860097 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-db-sync-config-data\") pod \"44250726-643f-4606-ab0d-d7a89342fea0\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.860135 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62481540-944b-4e05-a553-6325f0cf0ef4-secret-volume\") pod \"62481540-944b-4e05-a553-6325f0cf0ef4\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.860227 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62481540-944b-4e05-a553-6325f0cf0ef4-config-volume\") pod \"62481540-944b-4e05-a553-6325f0cf0ef4\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.860247 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-config-data\") pod \"44250726-643f-4606-ab0d-d7a89342fea0\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.860304 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c6tv\" (UniqueName: \"kubernetes.io/projected/44250726-643f-4606-ab0d-d7a89342fea0-kube-api-access-5c6tv\") pod \"44250726-643f-4606-ab0d-d7a89342fea0\" (UID: \"44250726-643f-4606-ab0d-d7a89342fea0\") " Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.860329 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml2tj\" (UniqueName: \"kubernetes.io/projected/62481540-944b-4e05-a553-6325f0cf0ef4-kube-api-access-ml2tj\") pod \"62481540-944b-4e05-a553-6325f0cf0ef4\" (UID: \"62481540-944b-4e05-a553-6325f0cf0ef4\") " Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.860901 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62481540-944b-4e05-a553-6325f0cf0ef4-config-volume" (OuterVolumeSpecName: "config-volume") pod "62481540-944b-4e05-a553-6325f0cf0ef4" (UID: "62481540-944b-4e05-a553-6325f0cf0ef4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.871908 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62481540-944b-4e05-a553-6325f0cf0ef4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62481540-944b-4e05-a553-6325f0cf0ef4" (UID: "62481540-944b-4e05-a553-6325f0cf0ef4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.871994 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "44250726-643f-4606-ab0d-d7a89342fea0" (UID: "44250726-643f-4606-ab0d-d7a89342fea0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.872173 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44250726-643f-4606-ab0d-d7a89342fea0-kube-api-access-5c6tv" (OuterVolumeSpecName: "kube-api-access-5c6tv") pod "44250726-643f-4606-ab0d-d7a89342fea0" (UID: "44250726-643f-4606-ab0d-d7a89342fea0"). InnerVolumeSpecName "kube-api-access-5c6tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.875007 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62481540-944b-4e05-a553-6325f0cf0ef4-kube-api-access-ml2tj" (OuterVolumeSpecName: "kube-api-access-ml2tj") pod "62481540-944b-4e05-a553-6325f0cf0ef4" (UID: "62481540-944b-4e05-a553-6325f0cf0ef4"). InnerVolumeSpecName "kube-api-access-ml2tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.900165 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-config-data" (OuterVolumeSpecName: "config-data") pod "44250726-643f-4606-ab0d-d7a89342fea0" (UID: "44250726-643f-4606-ab0d-d7a89342fea0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.962159 5030 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62481540-944b-4e05-a553-6325f0cf0ef4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.962188 5030 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.962203 5030 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62481540-944b-4e05-a553-6325f0cf0ef4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.962214 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44250726-643f-4606-ab0d-d7a89342fea0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.962227 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c6tv\" (UniqueName: \"kubernetes.io/projected/44250726-643f-4606-ab0d-d7a89342fea0-kube-api-access-5c6tv\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:02.962240 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml2tj\" (UniqueName: \"kubernetes.io/projected/62481540-944b-4e05-a553-6325f0cf0ef4-kube-api-access-ml2tj\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.202241 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.202344 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.202431 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.203666 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7058c9055a9b9f831de3e82c6637d0fddb246f761f212b4d9db9f0e85aa948a"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.203771 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://a7058c9055a9b9f831de3e82c6637d0fddb246f761f212b4d9db9f0e85aa948a" gracePeriod=600 Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.477880 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-5s47r" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.477893 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-5s47r" event={"ID":"44250726-643f-4606-ab0d-d7a89342fea0","Type":"ContainerDied","Data":"4dce725b757d6771f7d0beec70557f5d083eff2535cb4dfc40ecd9c3903d8015"} Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.478105 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dce725b757d6771f7d0beec70557f5d083eff2535cb4dfc40ecd9c3903d8015" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.482797 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" event={"ID":"62481540-944b-4e05-a553-6325f0cf0ef4","Type":"ContainerDied","Data":"bd4b7a9a86f7dbea9b0757aef09983d8f86e1c46d00f75b4d05a5dbddcefc17a"} Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.482843 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405535-qnhsc" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.482864 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4b7a9a86f7dbea9b0757aef09983d8f86e1c46d00f75b4d05a5dbddcefc17a" Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.487388 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="a7058c9055a9b9f831de3e82c6637d0fddb246f761f212b4d9db9f0e85aa948a" exitCode=0 Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.487447 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"a7058c9055a9b9f831de3e82c6637d0fddb246f761f212b4d9db9f0e85aa948a"} Nov 28 12:15:03 crc kubenswrapper[5030]: I1128 12:15:03.487503 5030 scope.go:117] "RemoveContainer" containerID="2b5a0df1bdf326961f0bfd95e325cb1bcebbae770d53c82e197938a5584c8725" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.508332 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c"} Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.626255 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:15:04 crc kubenswrapper[5030]: E1128 12:15:04.626685 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44250726-643f-4606-ab0d-d7a89342fea0" containerName="glance-db-sync" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.626708 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="44250726-643f-4606-ab0d-d7a89342fea0" containerName="glance-db-sync" Nov 28 
12:15:04 crc kubenswrapper[5030]: E1128 12:15:04.626727 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62481540-944b-4e05-a553-6325f0cf0ef4" containerName="collect-profiles" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.626737 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="62481540-944b-4e05-a553-6325f0cf0ef4" containerName="collect-profiles" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.626923 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="44250726-643f-4606-ab0d-d7a89342fea0" containerName="glance-db-sync" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.626955 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="62481540-944b-4e05-a553-6325f0cf0ef4" containerName="collect-profiles" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.628294 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.631913 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.632183 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-74pbj" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.633053 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.644260 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693263 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" 
(UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693354 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693409 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693432 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693531 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-logs\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693611 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693708 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693769 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693799 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-dev\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693843 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693867 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-run\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.693958 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7hnd\" (UniqueName: \"kubernetes.io/projected/395ef274-d899-4e7e-ab5b-558771ced96d-kube-api-access-q7hnd\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.694028 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-sys\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.694091 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.795892 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-dev\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.795959 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.795990 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-run\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796047 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7hnd\" (UniqueName: \"kubernetes.io/projected/395ef274-d899-4e7e-ab5b-558771ced96d-kube-api-access-q7hnd\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796076 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-sys\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796085 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-run\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796099 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796158 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796210 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796239 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796229 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796286 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-lib-modules\") 
pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796262 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796332 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-sys\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796383 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-logs\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796455 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796654 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796730 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796392 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796805 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796879 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-dev\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.796953 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.797009 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.797496 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.797640 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-logs\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.804075 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.808436 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 
crc kubenswrapper[5030]: I1128 12:15:04.816135 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7hnd\" (UniqueName: \"kubernetes.io/projected/395ef274-d899-4e7e-ab5b-558771ced96d-kube-api-access-q7hnd\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.819864 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.822664 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.949073 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.981857 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.991409 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:04 crc kubenswrapper[5030]: I1128 12:15:04.998510 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.002154 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.205842 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.205939 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.205994 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-dev\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206016 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206052 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206077 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-sys\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206105 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206127 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-run\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206192 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206222 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-logs\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206241 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206258 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206283 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dd2c\" (UniqueName: \"kubernetes.io/projected/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-kube-api-access-4dd2c\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.206305 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308082 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-dev\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308131 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308171 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308194 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-sys\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308212 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-dev\") pod 
\"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308223 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308269 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-run\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308325 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308329 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308367 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-sys\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308432 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-run\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308450 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308584 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.308972 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-logs\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309285 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-logs\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc 
kubenswrapper[5030]: I1128 12:15:05.309321 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309340 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309695 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dd2c\" (UniqueName: \"kubernetes.io/projected/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-kube-api-access-4dd2c\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309723 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309778 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: 
I1128 12:15:05.309797 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309817 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309844 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309909 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.309913 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.316804 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.318505 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.328099 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dd2c\" (UniqueName: \"kubernetes.io/projected/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-kube-api-access-4dd2c\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.328563 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.334875 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.353637 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.457355 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.527561 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerStarted","Data":"d5cbc57caf6a0870ebf73b427a0162167ac2d82be6152a5d8a2857ec450cdba4"} Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.579272 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:15:05 crc kubenswrapper[5030]: I1128 12:15:05.831019 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:15:05 crc kubenswrapper[5030]: W1128 12:15:05.851834 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a38c9f8_9bac_48d7_9a42_64e50dcbf030.slice/crio-b8e9aa102219627694e7ff953b910e809cb1d20a0f9f32d7725545df9c7d602d WatchSource:0}: Error finding container b8e9aa102219627694e7ff953b910e809cb1d20a0f9f32d7725545df9c7d602d: Status 404 returned error can't find the container with id b8e9aa102219627694e7ff953b910e809cb1d20a0f9f32d7725545df9c7d602d Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.538261 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerStarted","Data":"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc"} Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.539093 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerStarted","Data":"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445"} Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.539107 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerStarted","Data":"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad"} Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.542456 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerStarted","Data":"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"} Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.542600 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerStarted","Data":"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"} Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.542704 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerStarted","Data":"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"} Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.542773 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerStarted","Data":"b8e9aa102219627694e7ff953b910e809cb1d20a0f9f32d7725545df9c7d602d"} Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.542885 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" 
podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-api" containerID="cri-o://8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1" gracePeriod=30 Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.542935 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-httpd" containerID="cri-o://73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5" gracePeriod=30 Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.542880 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-log" containerID="cri-o://eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960" gracePeriod=30 Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.594976 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.59494908 podStartE2EDuration="2.59494908s" podCreationTimestamp="2025-11-28 12:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:06.580391176 +0000 UTC m=+1324.522133899" watchObservedRunningTime="2025-11-28 12:15:06.59494908 +0000 UTC m=+1324.536691763" Nov 28 12:15:06 crc kubenswrapper[5030]: I1128 12:15:06.621304 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=3.6212824120000002 podStartE2EDuration="3.621282412s" podCreationTimestamp="2025-11-28 12:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:06.61457331 +0000 UTC m=+1324.556316033" 
watchObservedRunningTime="2025-11-28 12:15:06.621282412 +0000 UTC m=+1324.563025115" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.037416 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.142997 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dd2c\" (UniqueName: \"kubernetes.io/projected/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-kube-api-access-4dd2c\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143071 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-nvme\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143108 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-sys\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143140 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-config-data\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143179 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-lib-modules\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: 
\"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143207 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-dev\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143230 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-run\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143232 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-sys" (OuterVolumeSpecName: "sys") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143272 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143291 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-run" (OuterVolumeSpecName: "run") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143288 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-dev" (OuterVolumeSpecName: "dev") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143303 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-scripts\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143311 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143220 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143408 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143512 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-iscsi\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143570 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-httpd-run\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143606 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143620 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-logs\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143787 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.143971 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144235 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-logs" (OuterVolumeSpecName: "logs") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144291 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-var-locks-brick\") pod \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\" (UID: \"3a38c9f8-9bac-48d7-9a42-64e50dcbf030\") " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144760 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144790 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144804 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144818 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144832 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144843 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 
12:15:07.144882 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144894 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.144905 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.150922 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-scripts" (OuterVolumeSpecName: "scripts") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.151327 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.153752 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.153978 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-kube-api-access-4dd2c" (OuterVolumeSpecName: "kube-api-access-4dd2c") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "kube-api-access-4dd2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.241282 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-config-data" (OuterVolumeSpecName: "config-data") pod "3a38c9f8-9bac-48d7-9a42-64e50dcbf030" (UID: "3a38c9f8-9bac-48d7-9a42-64e50dcbf030"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.246629 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.246683 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.246707 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.246722 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dd2c\" (UniqueName: \"kubernetes.io/projected/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-kube-api-access-4dd2c\") on node \"crc\" DevicePath \"\"" Nov 28 
12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.246738 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a38c9f8-9bac-48d7-9a42-64e50dcbf030-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.267275 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.268701 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.347962 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.347994 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555109 5030 generic.go:334] "Generic (PLEG): container finished" podID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerID="8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1" exitCode=143 Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555148 5030 generic.go:334] "Generic (PLEG): container finished" podID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerID="73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5" exitCode=143 Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555159 5030 generic.go:334] "Generic (PLEG): container finished" podID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerID="eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960" exitCode=143 Nov 28 12:15:07 crc 
kubenswrapper[5030]: I1128 12:15:07.555158 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerDied","Data":"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"}
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555220 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerDied","Data":"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"}
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555219 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555231 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerDied","Data":"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"}
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555245 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"3a38c9f8-9bac-48d7-9a42-64e50dcbf030","Type":"ContainerDied","Data":"b8e9aa102219627694e7ff953b910e809cb1d20a0f9f32d7725545df9c7d602d"}
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.555276 5030 scope.go:117] "RemoveContainer" containerID="8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.595425 5030 scope.go:117] "RemoveContainer" containerID="73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.600431 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.610750 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.622514 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 28 12:15:07 crc kubenswrapper[5030]: E1128 12:15:07.623295 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-log"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.623314 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-log"
Nov 28 12:15:07 crc kubenswrapper[5030]: E1128 12:15:07.623334 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-httpd"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.623341 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-httpd"
Nov 28 12:15:07 crc kubenswrapper[5030]: E1128 12:15:07.623362 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-api"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.623368 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-api"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.623517 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-log"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.623539 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-httpd"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.623556 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" containerName="glance-api"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.624634 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.627206 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.629304 5030 scope.go:117] "RemoveContainer" containerID="eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.651069 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.661077 5030 scope.go:117] "RemoveContainer" containerID="8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"
Nov 28 12:15:07 crc kubenswrapper[5030]: E1128 12:15:07.662629 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1\": container with ID starting with 8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1 not found: ID does not exist" containerID="8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.662702 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"} err="failed to get container status \"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1\": rpc error: code = NotFound desc = could not find container \"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1\": container with ID starting with 8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.662749 5030 scope.go:117] "RemoveContainer" containerID="73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"
Nov 28 12:15:07 crc kubenswrapper[5030]: E1128 12:15:07.664792 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5\": container with ID starting with 73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5 not found: ID does not exist" containerID="73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.664859 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"} err="failed to get container status \"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5\": rpc error: code = NotFound desc = could not find container \"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5\": container with ID starting with 73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.664905 5030 scope.go:117] "RemoveContainer" containerID="eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"
Nov 28 12:15:07 crc kubenswrapper[5030]: E1128 12:15:07.666956 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960\": container with ID starting with eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960 not found: ID does not exist" containerID="eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.667006 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"} err="failed to get container status \"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960\": rpc error: code = NotFound desc = could not find container \"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960\": container with ID starting with eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.667040 5030 scope.go:117] "RemoveContainer" containerID="8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.672806 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"} err="failed to get container status \"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1\": rpc error: code = NotFound desc = could not find container \"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1\": container with ID starting with 8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.672869 5030 scope.go:117] "RemoveContainer" containerID="73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.673361 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"} err="failed to get container status \"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5\": rpc error: code = NotFound desc = could not find container \"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5\": container with ID starting with 73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.673403 5030 scope.go:117] "RemoveContainer" containerID="eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.673756 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"} err="failed to get container status \"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960\": rpc error: code = NotFound desc = could not find container \"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960\": container with ID starting with eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.673794 5030 scope.go:117] "RemoveContainer" containerID="8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.674077 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1"} err="failed to get container status \"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1\": rpc error: code = NotFound desc = could not find container \"8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1\": container with ID starting with 8dafb529469cd592f16a5eaa78503a4001e1e2da05ae83fc8d2b5f3ec9b620e1 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.674117 5030 scope.go:117] "RemoveContainer" containerID="73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.674350 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5"} err="failed to get container status \"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5\": rpc error: code = NotFound desc = could not find container \"73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5\": container with ID starting with 73aa8a3458b3ebfab34d7f522691660b5cd3f5fbb6adcbf304bcf11ce261f3a5 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.674430 5030 scope.go:117] "RemoveContainer" containerID="eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.674793 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960"} err="failed to get container status \"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960\": rpc error: code = NotFound desc = could not find container \"eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960\": container with ID starting with eb4a525dc763e40e05ac4df472934d319beea223debd16d3b4cc7e1120afe960 not found: ID does not exist"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755663 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755731 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755756 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-run\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755777 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755809 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755824 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-dev\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755848 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgcs\" (UniqueName: \"kubernetes.io/projected/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-kube-api-access-7pgcs\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755877 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755898 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.755928 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-sys\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.757415 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.757616 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.757729 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.757841 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.860441 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.860604 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-dev\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.860638 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pgcs\" (UniqueName: \"kubernetes.io/projected/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-kube-api-access-7pgcs\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.860729 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-dev\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.860668 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.860829 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.860851 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861067 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-sys\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861105 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861119 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861164 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861196 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861228 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861233 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861399 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861560 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861572 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861607 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-run\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861643 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861163 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-sys\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.862375 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-run\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.861197 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.862673 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.863149 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.865374 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.879581 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.880436 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.888029 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pgcs\" (UniqueName: \"kubernetes.io/projected/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-kube-api-access-7pgcs\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.893063 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.909288 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:07 crc kubenswrapper[5030]: I1128 12:15:07.957380 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:08 crc kubenswrapper[5030]: I1128 12:15:08.408759 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a38c9f8-9bac-48d7-9a42-64e50dcbf030" path="/var/lib/kubelet/pods/3a38c9f8-9bac-48d7-9a42-64e50dcbf030/volumes"
Nov 28 12:15:08 crc kubenswrapper[5030]: I1128 12:15:08.410742 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 28 12:15:08 crc kubenswrapper[5030]: I1128 12:15:08.563083 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerStarted","Data":"a951c1591194cc31c9904badb9f0fe6cff409fc3c103c8a79af2b8f08d717c86"}
Nov 28 12:15:09 crc kubenswrapper[5030]: I1128 12:15:09.580211 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerStarted","Data":"1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d"}
Nov 28 12:15:09 crc kubenswrapper[5030]: I1128 12:15:09.581088 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerStarted","Data":"405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920"}
Nov 28 12:15:09 crc kubenswrapper[5030]: I1128 12:15:09.581115 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerStarted","Data":"53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230"}
Nov 28 12:15:09 crc kubenswrapper[5030]: I1128 12:15:09.628829 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.628806694 podStartE2EDuration="2.628806694s" podCreationTimestamp="2025-11-28 12:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:09.611943128 +0000 UTC m=+1327.553685851" watchObservedRunningTime="2025-11-28 12:15:09.628806694 +0000 UTC m=+1327.570549377"
Nov 28 12:15:14 crc kubenswrapper[5030]: I1128 12:15:14.950180 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:14 crc kubenswrapper[5030]: I1128 12:15:14.951014 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:14 crc kubenswrapper[5030]: I1128 12:15:14.951037 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:14 crc kubenswrapper[5030]: I1128 12:15:14.986738 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:14 crc kubenswrapper[5030]: I1128 12:15:14.987491 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:15 crc kubenswrapper[5030]: I1128 12:15:15.005811 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:15 crc kubenswrapper[5030]: I1128 12:15:15.635761 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:15 crc kubenswrapper[5030]: I1128 12:15:15.635837 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:15 crc kubenswrapper[5030]: I1128 12:15:15.635859 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:15 crc kubenswrapper[5030]: I1128 12:15:15.654246 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:15 crc kubenswrapper[5030]: I1128 12:15:15.659064 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:15 crc kubenswrapper[5030]: I1128 12:15:15.665139 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:15:17 crc kubenswrapper[5030]: I1128 12:15:17.958025 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:17 crc kubenswrapper[5030]: I1128 12:15:17.978336 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:17 crc kubenswrapper[5030]: I1128 12:15:17.978422 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.019206 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.022384 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.053339 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.670143 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.670229 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.670253 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.693579 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.694033 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:18 crc kubenswrapper[5030]: I1128 12:15:18.695016 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.335539 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"]
Nov 28 12:15:22 
crc kubenswrapper[5030]: I1128 12:15:22.338448 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.349887 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.352303 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.374833 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.391727 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.448130 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.449930 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451354 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451400 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-dev\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451455 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwg9t\" (UniqueName: \"kubernetes.io/projected/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-kube-api-access-lwg9t\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451537 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451572 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-config-data\") pod 
\"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451598 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-dev\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451668 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-scripts\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451739 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451810 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451852 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vwx\" (UniqueName: 
\"kubernetes.io/projected/207bcd10-295a-42b0-87e7-c30a3127bc5e-kube-api-access-68vwx\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451905 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451942 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451969 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-scripts\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.451994 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452023 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-sys\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452050 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-config-data\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452073 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452095 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-sys\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452135 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-logs\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452161 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452209 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452239 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-logs\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452260 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-run\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452281 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452313 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452337 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-run\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452372 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.452397 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.464344 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.466906 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.488739 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.500463 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.553335 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-run\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.553503 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-run\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.553387 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-sys\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554276 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-logs\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc 
kubenswrapper[5030]: I1128 12:15:22.554315 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554376 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-scripts\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554431 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-scripts\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554480 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554531 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc 
kubenswrapper[5030]: I1128 12:15:22.554537 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-dev\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554596 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554645 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554686 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-config-data\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.554707 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 
28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.555090 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.555321 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-dev\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.555384 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.555396 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-dev\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.555554 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 
12:15:22.555607 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwg9t\" (UniqueName: \"kubernetes.io/projected/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-kube-api-access-lwg9t\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557276 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557316 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-config-data\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557385 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557418 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 
12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557443 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-config-data\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557480 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-dev\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557537 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-scripts\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557566 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557792 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557646 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-dev\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557837 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557939 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.557911 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558073 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68vwx\" (UniqueName: \"kubernetes.io/projected/207bcd10-295a-42b0-87e7-c30a3127bc5e-kube-api-access-68vwx\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558167 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/5739be95-cccf-4519-9582-8af5c8390e00-kube-api-access-z6z54\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558357 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558506 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558608 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558670 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558700 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558740 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558786 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-scripts\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558821 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558875 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558907 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-sys\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558950 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-config-data\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.558988 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559095 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-sys\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559140 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-sys\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559184 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559215 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-logs\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559229 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-sys\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559325 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559328 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559403 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-logs\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559423 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559519 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560170 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-logs\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.559451 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560264 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-run\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560346 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560438 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560496 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560536 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-logs\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560565 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-run\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560594 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-sys\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560632 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560686 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9mb9\" (UniqueName: \"kubernetes.io/projected/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-kube-api-access-s9mb9\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560732 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-run\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560809 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560854 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560892 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.560921 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-dev\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.561168 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.561592 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-logs\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.561673 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-run\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.561743 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.561813 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.568952 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-config-data\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.569582 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-scripts\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.570143 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-config-data\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.578883 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwg9t\" (UniqueName: \"kubernetes.io/projected/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-kube-api-access-lwg9t\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.579310 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-scripts\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.581191 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.582800 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.584386 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.589070 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68vwx\" (UniqueName: \"kubernetes.io/projected/207bcd10-295a-42b0-87e7-c30a3127bc5e-kube-api-access-68vwx\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.596168 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-1\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662757 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662830 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-logs\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662853 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662882 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-run\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662910 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662926 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662942 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-sys\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662966 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9mb9\" (UniqueName: \"kubernetes.io/projected/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-kube-api-access-s9mb9\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.662983 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-run\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663000 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663015 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-dev\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663034 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663055 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-sys\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663072 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-logs\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663090 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663112 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-scripts\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663130 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-scripts\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663150 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-dev\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663174 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663192 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-config-data\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663219 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663237 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663255 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-config-data\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663278 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663309 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663337 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/5739be95-cccf-4519-9582-8af5c8390e00-kube-api-access-z6z54\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663360 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663377 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663512 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663660 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") device mount path \"/mnt/openstack/pv04\"" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.663954 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.664059 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.664100 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-dev\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.664252 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.664404 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-logs\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.664516 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-sys\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.664963 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-run\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665058 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665105 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-dev\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665174 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665082 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-logs\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665194 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665236 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-run\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665306 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665379 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665432 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665711 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665809 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.665920 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-sys\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.666061 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-internal-api-1"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.670726 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-scripts\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.671449 5030 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.671644 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-config-data\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.682499 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-config-data\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.682571 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-scripts\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.689167 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.711292 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9mb9\" (UniqueName: \"kubernetes.io/projected/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-kube-api-access-s9mb9\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.718913 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/5739be95-cccf-4519-9582-8af5c8390e00-kube-api-access-z6z54\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.720273 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.731053 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.732898 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-1\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 
28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.750166 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-2\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.776289 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:22 crc kubenswrapper[5030]: I1128 12:15:22.795527 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.160119 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:15:23 crc kubenswrapper[5030]: W1128 12:15:23.167575 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod207bcd10_295a_42b0_87e7_c30a3127bc5e.slice/crio-5b988e2018aef7ea322137e468d52dd1bd8c2122cf27e886250230647ceeb051 WatchSource:0}: Error finding container 5b988e2018aef7ea322137e468d52dd1bd8c2122cf27e886250230647ceeb051: Status 404 returned error can't find the container with id 5b988e2018aef7ea322137e468d52dd1bd8c2122cf27e886250230647ceeb051 Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.234177 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:15:23 crc kubenswrapper[5030]: W1128 12:15:23.240945 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ffc0b5_a6f5_419b_92cc_21a74d507cc9.slice/crio-62d04adccf0d3f04cd6b57eeefacd833075fb4c64bb7d303b1104a09f1e8a59e WatchSource:0}: Error finding container 
62d04adccf0d3f04cd6b57eeefacd833075fb4c64bb7d303b1104a09f1e8a59e: Status 404 returned error can't find the container with id 62d04adccf0d3f04cd6b57eeefacd833075fb4c64bb7d303b1104a09f1e8a59e Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.307253 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.386196 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:15:23 crc kubenswrapper[5030]: W1128 12:15:23.406110 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ad9fe36_4bc8_4b0d_831f_f4191dabdabe.slice/crio-d9197a1dbe93d0d45da6d75167a925838a9a67405cd1a42c4eae4d1392f256a1 WatchSource:0}: Error finding container d9197a1dbe93d0d45da6d75167a925838a9a67405cd1a42c4eae4d1392f256a1: Status 404 returned error can't find the container with id d9197a1dbe93d0d45da6d75167a925838a9a67405cd1a42c4eae4d1392f256a1 Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.724872 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerStarted","Data":"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.725699 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerStarted","Data":"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.725720 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" 
event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerStarted","Data":"5b988e2018aef7ea322137e468d52dd1bd8c2122cf27e886250230647ceeb051"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.727087 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerStarted","Data":"a34bfbaec53467c776a11e5f1671ef8fbf58427b5ef030c9d07cdebccf09cda2"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.727119 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerStarted","Data":"d9197a1dbe93d0d45da6d75167a925838a9a67405cd1a42c4eae4d1392f256a1"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.728668 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerStarted","Data":"049386ccb12db1e06f8c031dd8fe18fab6ee9166312f6da2fea7663a93bd66fd"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.728723 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerStarted","Data":"0ff4d24dbab9aead79bea9431d2882bffef9eb58ade40ee10a379895ea6f54cf"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.730554 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerStarted","Data":"ba87277b0b4715cd8db41bf91f3dc647a49ac3fb396e2a3d7700ce5ab3de8924"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.730584 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" 
event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerStarted","Data":"46e598fe14730e012da74919bd39e7c73b69416e53ecc6df1460ba87cbd9f72b"} Nov 28 12:15:23 crc kubenswrapper[5030]: I1128 12:15:23.730594 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerStarted","Data":"62d04adccf0d3f04cd6b57eeefacd833075fb4c64bb7d303b1104a09f1e8a59e"} Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.746323 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerStarted","Data":"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"} Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.750099 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerStarted","Data":"ac65024a908d685eb774c0ad9ff540e105d72912f03632e0c8f0c5c9f21a8e51"} Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.750165 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerStarted","Data":"96fece53dc5c5c331fc7c3a1303c952a1c48b264f2a006a037a40822b01b1e3d"} Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.753004 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerStarted","Data":"e6b1c65bef4b5fdeb274bbf7606c0b5f4b9a6ac6c5f33a4dc245648f080bd23a"} Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.753083 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerStarted","Data":"9f4d90b7f52caf6367e59e2491df26df7a1f58defe943648528dadf025546d77"} Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.755537 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerStarted","Data":"2335ef443a935f8fb54c371729cad25f22762eb839b768352ee13a8dcc992b70"} Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.821206 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=3.821185328 podStartE2EDuration="3.821185328s" podCreationTimestamp="2025-11-28 12:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:24.819175423 +0000 UTC m=+1342.760918136" watchObservedRunningTime="2025-11-28 12:15:24.821185328 +0000 UTC m=+1342.762928011" Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.821708 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=3.8217027310000002 podStartE2EDuration="3.821702731s" podCreationTimestamp="2025-11-28 12:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:24.775850831 +0000 UTC m=+1342.717593514" watchObservedRunningTime="2025-11-28 12:15:24.821702731 +0000 UTC m=+1342.763445414" Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.852623 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=3.852598137 podStartE2EDuration="3.852598137s" podCreationTimestamp="2025-11-28 12:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:24.845971367 +0000 UTC m=+1342.787714050" watchObservedRunningTime="2025-11-28 12:15:24.852598137 +0000 UTC m=+1342.794340820" Nov 28 12:15:24 crc kubenswrapper[5030]: I1128 12:15:24.889370 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" podStartSLOduration=3.88934668 podStartE2EDuration="3.88934668s" podCreationTimestamp="2025-11-28 12:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:24.884620123 +0000 UTC m=+1342.826362806" watchObservedRunningTime="2025-11-28 12:15:24.88934668 +0000 UTC m=+1342.831089363" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.672292 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.673137 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.673159 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.691560 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.691619 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.691630 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc 
kubenswrapper[5030]: I1128 12:15:32.702495 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.702563 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.716260 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.734963 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.738160 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.740278 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.776531 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.776893 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.777052 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.797161 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.797225 5030 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.797242 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.813344 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.837576 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.844755 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.845853 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.845999 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.846030 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.846044 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.846057 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.846068 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 
28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.846083 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.847865 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.848825 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.862725 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.863664 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.864079 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.866962 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.867944 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.868316 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.869165 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.869664 5030 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.869956 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.871405 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.873789 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:32 crc kubenswrapper[5030]: I1128 12:15:32.875639 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:33 crc kubenswrapper[5030]: I1128 12:15:33.855403 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:33 crc kubenswrapper[5030]: I1128 12:15:33.855460 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:33 crc kubenswrapper[5030]: I1128 12:15:33.855744 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:33 crc kubenswrapper[5030]: I1128 12:15:33.860163 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:33 crc kubenswrapper[5030]: I1128 12:15:33.863511 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:33 crc kubenswrapper[5030]: I1128 12:15:33.871703 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:34 crc kubenswrapper[5030]: I1128 12:15:34.897085 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:15:34 crc kubenswrapper[5030]: I1128 12:15:34.908718 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.088423 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.100405 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.869112 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-log" containerID="cri-o://ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6" gracePeriod=30 Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.869302 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-api" containerID="cri-o://71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8" gracePeriod=30 Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.869332 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-httpd" containerID="cri-o://30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26" gracePeriod=30 Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.870400 5030 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/glance-default-external-api-2" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-httpd" containerID="cri-o://ba87277b0b4715cd8db41bf91f3dc647a49ac3fb396e2a3d7700ce5ab3de8924" gracePeriod=30 Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.870392 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-api" containerID="cri-o://2335ef443a935f8fb54c371729cad25f22762eb839b768352ee13a8dcc992b70" gracePeriod=30 Nov 28 12:15:35 crc kubenswrapper[5030]: I1128 12:15:35.870688 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-log" containerID="cri-o://46e598fe14730e012da74919bd39e7c73b69416e53ecc6df1460ba87cbd9f72b" gracePeriod=30 Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.791764 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.879622 5030 generic.go:334] "Generic (PLEG): container finished" podID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerID="71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8" exitCode=0
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.879687 5030 generic.go:334] "Generic (PLEG): container finished" podID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerID="30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26" exitCode=0
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.879696 5030 generic.go:334] "Generic (PLEG): container finished" podID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerID="ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6" exitCode=143
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.879987 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1"
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.880058 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerDied","Data":"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.880106 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerDied","Data":"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.880119 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerDied","Data":"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.880131 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"207bcd10-295a-42b0-87e7-c30a3127bc5e","Type":"ContainerDied","Data":"5b988e2018aef7ea322137e468d52dd1bd8c2122cf27e886250230647ceeb051"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.880151 5030 scope.go:117] "RemoveContainer" containerID="71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.886579 5030 generic.go:334] "Generic (PLEG): container finished" podID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerID="2335ef443a935f8fb54c371729cad25f22762eb839b768352ee13a8dcc992b70" exitCode=0
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.886601 5030 generic.go:334] "Generic (PLEG): container finished" podID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerID="ba87277b0b4715cd8db41bf91f3dc647a49ac3fb396e2a3d7700ce5ab3de8924" exitCode=0
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.886611 5030 generic.go:334] "Generic (PLEG): container finished" podID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerID="46e598fe14730e012da74919bd39e7c73b69416e53ecc6df1460ba87cbd9f72b" exitCode=143
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.886836 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-log" containerID="cri-o://049386ccb12db1e06f8c031dd8fe18fab6ee9166312f6da2fea7663a93bd66fd" gracePeriod=30
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887078 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerDied","Data":"2335ef443a935f8fb54c371729cad25f22762eb839b768352ee13a8dcc992b70"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887112 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerDied","Data":"ba87277b0b4715cd8db41bf91f3dc647a49ac3fb396e2a3d7700ce5ab3de8924"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887122 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerDied","Data":"46e598fe14730e012da74919bd39e7c73b69416e53ecc6df1460ba87cbd9f72b"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887132 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"77ffc0b5-a6f5-419b-92cc-21a74d507cc9","Type":"ContainerDied","Data":"62d04adccf0d3f04cd6b57eeefacd833075fb4c64bb7d303b1104a09f1e8a59e"}
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887140 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62d04adccf0d3f04cd6b57eeefacd833075fb4c64bb7d303b1104a09f1e8a59e"
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887318 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-log" containerID="cri-o://a34bfbaec53467c776a11e5f1671ef8fbf58427b5ef030c9d07cdebccf09cda2" gracePeriod=30
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887611 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-api" containerID="cri-o://e6b1c65bef4b5fdeb274bbf7606c0b5f4b9a6ac6c5f33a4dc245648f080bd23a" gracePeriod=30
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.887884 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-httpd" containerID="cri-o://9f4d90b7f52caf6367e59e2491df26df7a1f58defe943648528dadf025546d77" gracePeriod=30
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.888107 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-api" containerID="cri-o://ac65024a908d685eb774c0ad9ff540e105d72912f03632e0c8f0c5c9f21a8e51" gracePeriod=30
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.888116 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-httpd" containerID="cri-o://96fece53dc5c5c331fc7c3a1303c952a1c48b264f2a006a037a40822b01b1e3d" gracePeriod=30
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.908193 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2"
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.937196 5030 scope.go:117] "RemoveContainer" containerID="30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.968784 5030 scope.go:117] "RemoveContainer" containerID="ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978014 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-logs\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978113 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978133 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-iscsi\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978154 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-httpd-run\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978179 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68vwx\" (UniqueName: \"kubernetes.io/projected/207bcd10-295a-42b0-87e7-c30a3127bc5e-kube-api-access-68vwx\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978276 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-lib-modules\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978338 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978376 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-config-data\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978403 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-dev\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978435 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-sys\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978478 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-var-locks-brick\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978504 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-nvme\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978540 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-run\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.978573 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-scripts\") pod \"207bcd10-295a-42b0-87e7-c30a3127bc5e\" (UID: \"207bcd10-295a-42b0-87e7-c30a3127bc5e\") "
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.979926 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-logs" (OuterVolumeSpecName: "logs") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.987027 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.987104 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-dev" (OuterVolumeSpecName: "dev") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.987198 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.987202 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-sys" (OuterVolumeSpecName: "sys") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.987257 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.987234 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-run" (OuterVolumeSpecName: "run") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.987316 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.988157 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance-cache") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.988944 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.991730 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.991753 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-scripts" (OuterVolumeSpecName: "scripts") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:15:36 crc kubenswrapper[5030]: I1128 12:15:36.993001 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/207bcd10-295a-42b0-87e7-c30a3127bc5e-kube-api-access-68vwx" (OuterVolumeSpecName: "kube-api-access-68vwx") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "kube-api-access-68vwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.009701 5030 scope.go:117] "RemoveContainer" containerID="71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"
Nov 28 12:15:37 crc kubenswrapper[5030]: E1128 12:15:37.010274 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8\": container with ID starting with 71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8 not found: ID does not exist" containerID="71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.010324 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"} err="failed to get container status \"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8\": rpc error: code = NotFound desc = could not find container \"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8\": container with ID starting with 71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.010359 5030 scope.go:117] "RemoveContainer" containerID="30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"
Nov 28 12:15:37 crc kubenswrapper[5030]: E1128 12:15:37.010973 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26\": container with ID starting with 30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26 not found: ID does not exist" containerID="30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.011014 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"} err="failed to get container status \"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26\": rpc error: code = NotFound desc = could not find container \"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26\": container with ID starting with 30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.011040 5030 scope.go:117] "RemoveContainer" containerID="ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"
Nov 28 12:15:37 crc kubenswrapper[5030]: E1128 12:15:37.011548 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6\": container with ID starting with ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6 not found: ID does not exist" containerID="ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.011574 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"} err="failed to get container status \"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6\": rpc error: code = NotFound desc = could not find container \"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6\": container with ID starting with ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.011588 5030 scope.go:117] "RemoveContainer" containerID="71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.011821 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"} err="failed to get container status \"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8\": rpc error: code = NotFound desc = could not find container \"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8\": container with ID starting with 71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.011843 5030 scope.go:117] "RemoveContainer" containerID="30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.012038 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"} err="failed to get container status \"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26\": rpc error: code = NotFound desc = could not find container \"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26\": container with ID starting with 30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.012057 5030 scope.go:117] "RemoveContainer" containerID="ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.012311 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"} err="failed to get container status \"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6\": rpc error: code = NotFound desc = could not find container \"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6\": container with ID starting with ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.012330 5030 scope.go:117] "RemoveContainer" containerID="71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.012649 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8"} err="failed to get container status \"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8\": rpc error: code = NotFound desc = could not find container \"71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8\": container with ID starting with 71a21f5235ce6bd26d56d587c16d5a202184e5807db44b848e3df477718bb5b8 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.012706 5030 scope.go:117] "RemoveContainer" containerID="30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.017712 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26"} err="failed to get container status \"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26\": rpc error: code = NotFound desc = could not find container \"30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26\": container with ID starting with 30b12cf161fc25822a7e93e1fb340b390891535b4f89f2bef9b6ac8910797a26 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.017769 5030 scope.go:117] "RemoveContainer" containerID="ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.021702 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6"} err="failed to get container status \"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6\": rpc error: code = NotFound desc = could not find container \"ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6\": container with ID starting with ebed6b404c7d5ce1d4ce0e67c588eb0a2b976bd93012cb333c709e2f5bf75ea6 not found: ID does not exist"
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.072953 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-config-data" (OuterVolumeSpecName: "config-data") pod "207bcd10-295a-42b0-87e7-c30a3127bc5e" (UID: "207bcd10-295a-42b0-87e7-c30a3127bc5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079516 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-nvme\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079602 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-config-data\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079657 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079679 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-run\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079715 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-logs\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079777 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-iscsi\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079799 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-httpd-run\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079832 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwg9t\" (UniqueName: \"kubernetes.io/projected/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-kube-api-access-lwg9t\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079862 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-lib-modules\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079892 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-var-locks-brick\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079924 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-dev\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.079978 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-scripts\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080064 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080102 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-sys\") pod \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\" (UID: \"77ffc0b5-a6f5-419b-92cc-21a74d507cc9\") "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080177 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080282 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080244 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-sys" (OuterVolumeSpecName: "sys") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080886 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-dev" (OuterVolumeSpecName: "dev") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080741 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080830 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-run" (OuterVolumeSpecName: "run") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080745 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.080840 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-dev\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081017 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-sys\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081036 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-var-locks-brick\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081051 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081063 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081074 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081085 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081096 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-logs\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081140 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081154 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081167 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/207bcd10-295a-42b0-87e7-c30a3127bc5e-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081180 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68vwx\" (UniqueName: \"kubernetes.io/projected/207bcd10-295a-42b0-87e7-c30a3127bc5e-kube-api-access-68vwx\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081192 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081204 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/207bcd10-295a-42b0-87e7-c30a3127bc5e-lib-modules\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081248 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" "
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081259 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/207bcd10-295a-42b0-87e7-c30a3127bc5e-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081298 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.081317 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-logs" (OuterVolumeSpecName: "logs") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.084207 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.085515 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-scripts" (OuterVolumeSpecName: "scripts") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.088650 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-kube-api-access-lwg9t" (OuterVolumeSpecName: "kube-api-access-lwg9t") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "kube-api-access-lwg9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.088928 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "local-storage12-crc".
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.101319 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.103437 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.167113 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-config-data" (OuterVolumeSpecName: "config-data") pod "77ffc0b5-a6f5-419b-92cc-21a74d507cc9" (UID: "77ffc0b5-a6f5-419b-92cc-21a74d507cc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184582 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184617 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184627 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184640 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 12:15:37 crc kubenswrapper[5030]: 
I1128 12:15:37.184650 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184661 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184670 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184678 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184688 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwg9t\" (UniqueName: \"kubernetes.io/projected/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-kube-api-access-lwg9t\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184698 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184706 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184716 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-dev\") on node \"crc\" 
DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184724 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.184732 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ffc0b5-a6f5-419b-92cc-21a74d507cc9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.203522 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.205743 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.220606 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.231867 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.285912 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.286145 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.903559 5030 generic.go:334] "Generic (PLEG): container finished" podID="5739be95-cccf-4519-9582-8af5c8390e00" 
containerID="e6b1c65bef4b5fdeb274bbf7606c0b5f4b9a6ac6c5f33a4dc245648f080bd23a" exitCode=0 Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.904166 5030 generic.go:334] "Generic (PLEG): container finished" podID="5739be95-cccf-4519-9582-8af5c8390e00" containerID="9f4d90b7f52caf6367e59e2491df26df7a1f58defe943648528dadf025546d77" exitCode=0 Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.903607 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerDied","Data":"e6b1c65bef4b5fdeb274bbf7606c0b5f4b9a6ac6c5f33a4dc245648f080bd23a"} Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.904226 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerDied","Data":"9f4d90b7f52caf6367e59e2491df26df7a1f58defe943648528dadf025546d77"} Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.904248 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerDied","Data":"049386ccb12db1e06f8c031dd8fe18fab6ee9166312f6da2fea7663a93bd66fd"} Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.904187 5030 generic.go:334] "Generic (PLEG): container finished" podID="5739be95-cccf-4519-9582-8af5c8390e00" containerID="049386ccb12db1e06f8c031dd8fe18fab6ee9166312f6da2fea7663a93bd66fd" exitCode=143 Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.910588 5030 generic.go:334] "Generic (PLEG): container finished" podID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerID="ac65024a908d685eb774c0ad9ff540e105d72912f03632e0c8f0c5c9f21a8e51" exitCode=0 Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.910617 5030 generic.go:334] "Generic (PLEG): container finished" podID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" 
containerID="96fece53dc5c5c331fc7c3a1303c952a1c48b264f2a006a037a40822b01b1e3d" exitCode=0 Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.910629 5030 generic.go:334] "Generic (PLEG): container finished" podID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerID="a34bfbaec53467c776a11e5f1671ef8fbf58427b5ef030c9d07cdebccf09cda2" exitCode=143 Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.910717 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.910895 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerDied","Data":"ac65024a908d685eb774c0ad9ff540e105d72912f03632e0c8f0c5c9f21a8e51"} Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.913282 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerDied","Data":"96fece53dc5c5c331fc7c3a1303c952a1c48b264f2a006a037a40822b01b1e3d"} Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.913340 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerDied","Data":"a34bfbaec53467c776a11e5f1671ef8fbf58427b5ef030c9d07cdebccf09cda2"} Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.960244 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:15:37 crc kubenswrapper[5030]: I1128 12:15:37.969544 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.405123 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" path="/var/lib/kubelet/pods/207bcd10-295a-42b0-87e7-c30a3127bc5e/volumes" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.406496 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" path="/var/lib/kubelet/pods/77ffc0b5-a6f5-419b-92cc-21a74d507cc9/volumes" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.444797 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.449711 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509355 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-sys\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509431 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-var-locks-brick\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509452 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-sys" (OuterVolumeSpecName: "sys") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509571 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-logs\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509589 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509627 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-httpd-run\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509714 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509771 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509807 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-nvme\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509839 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-var-locks-brick\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509885 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-config-data\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509920 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-lib-modules\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509971 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-iscsi\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510018 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-nvme\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510050 5030 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-dev\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510083 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.509906 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510187 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-dev" (OuterVolumeSpecName: "dev") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510028 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510113 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-logs" (OuterVolumeSpecName: "logs") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510223 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510123 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510155 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510144 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/5739be95-cccf-4519-9582-8af5c8390e00-kube-api-access-z6z54\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510333 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-httpd-run\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510369 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-scripts\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510431 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-iscsi\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510464 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9mb9\" (UniqueName: \"kubernetes.io/projected/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-kube-api-access-s9mb9\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510526 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-logs\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510548 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-lib-modules\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510571 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-run\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510601 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-scripts\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510661 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-config-data\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510749 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-run\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510796 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510828 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-dev\") pod \"5739be95-cccf-4519-9582-8af5c8390e00\" (UID: \"5739be95-cccf-4519-9582-8af5c8390e00\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510874 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-sys\") pod \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\" (UID: \"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe\") " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.510923 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-run" (OuterVolumeSpecName: "run") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511080 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511167 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-run" (OuterVolumeSpecName: "run") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511401 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-logs" (OuterVolumeSpecName: "logs") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511448 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-sys" (OuterVolumeSpecName: "sys") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511855 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511867 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511910 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.511955 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-dev" (OuterVolumeSpecName: "dev") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512271 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512290 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512303 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512316 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512330 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512344 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512357 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512371 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512386 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512399 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512413 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512425 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512437 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512451 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512483 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512495 5030 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5739be95-cccf-4519-9582-8af5c8390e00-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512508 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.512520 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5739be95-cccf-4519-9582-8af5c8390e00-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.522546 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.529037 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.542728 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5739be95-cccf-4519-9582-8af5c8390e00-kube-api-access-z6z54" (OuterVolumeSpecName: "kube-api-access-z6z54") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "kube-api-access-z6z54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.544949 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.548109 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-scripts" (OuterVolumeSpecName: "scripts") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.548261 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-scripts" (OuterVolumeSpecName: "scripts") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.548681 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance-cache") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.551498 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-kube-api-access-s9mb9" (OuterVolumeSpecName: "kube-api-access-s9mb9") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "kube-api-access-s9mb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614148 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9mb9\" (UniqueName: \"kubernetes.io/projected/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-kube-api-access-s9mb9\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614184 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614214 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614228 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614243 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614255 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614265 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6z54\" (UniqueName: \"kubernetes.io/projected/5739be95-cccf-4519-9582-8af5c8390e00-kube-api-access-z6z54\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.614276 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.628710 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.638661 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.639451 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.662387 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.666449 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-config-data" (OuterVolumeSpecName: "config-data") pod "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" (UID: "0ad9fe36-4bc8-4b0d-831f-f4191dabdabe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.683615 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-config-data" (OuterVolumeSpecName: "config-data") pod "5739be95-cccf-4519-9582-8af5c8390e00" (UID: "5739be95-cccf-4519-9582-8af5c8390e00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.715032 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.715075 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5739be95-cccf-4519-9582-8af5c8390e00-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.715092 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.715101 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.715109 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.715118 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:38 crc 
kubenswrapper[5030]: I1128 12:15:38.921173 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.921170 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"5739be95-cccf-4519-9582-8af5c8390e00","Type":"ContainerDied","Data":"0ff4d24dbab9aead79bea9431d2882bffef9eb58ade40ee10a379895ea6f54cf"} Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.921254 5030 scope.go:117] "RemoveContainer" containerID="e6b1c65bef4b5fdeb274bbf7606c0b5f4b9a6ac6c5f33a4dc245648f080bd23a" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.924819 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"0ad9fe36-4bc8-4b0d-831f-f4191dabdabe","Type":"ContainerDied","Data":"d9197a1dbe93d0d45da6d75167a925838a9a67405cd1a42c4eae4d1392f256a1"} Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.924933 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.946338 5030 scope.go:117] "RemoveContainer" containerID="9f4d90b7f52caf6367e59e2491df26df7a1f58defe943648528dadf025546d77" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.965519 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.974617 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.984046 5030 scope.go:117] "RemoveContainer" containerID="049386ccb12db1e06f8c031dd8fe18fab6ee9166312f6da2fea7663a93bd66fd" Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.985214 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:15:38 crc kubenswrapper[5030]: I1128 12:15:38.992138 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.003213 5030 scope.go:117] "RemoveContainer" containerID="ac65024a908d685eb774c0ad9ff540e105d72912f03632e0c8f0c5c9f21a8e51" Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.034572 5030 scope.go:117] "RemoveContainer" containerID="96fece53dc5c5c331fc7c3a1303c952a1c48b264f2a006a037a40822b01b1e3d" Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.063294 5030 scope.go:117] "RemoveContainer" containerID="a34bfbaec53467c776a11e5f1671ef8fbf58427b5ef030c9d07cdebccf09cda2" Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.801006 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.801521 5030 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/glance-default-external-api-0" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-log" containerID="cri-o://0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad" gracePeriod=30 Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.801737 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-api" containerID="cri-o://ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc" gracePeriod=30 Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.801832 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-httpd" containerID="cri-o://8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445" gracePeriod=30 Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.950562 5030 generic.go:334] "Generic (PLEG): container finished" podID="395ef274-d899-4e7e-ab5b-558771ced96d" containerID="0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad" exitCode=143 Nov 28 12:15:39 crc kubenswrapper[5030]: I1128 12:15:39.950660 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerDied","Data":"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad"} Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.303514 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.303920 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-log" 
containerID="cri-o://53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230" gracePeriod=30 Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.304039 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-api" containerID="cri-o://1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d" gracePeriod=30 Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.304061 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-httpd" containerID="cri-o://405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920" gracePeriod=30 Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.405027 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" path="/var/lib/kubelet/pods/0ad9fe36-4bc8-4b0d-831f-f4191dabdabe/volumes" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.406256 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5739be95-cccf-4519-9582-8af5c8390e00" path="/var/lib/kubelet/pods/5739be95-cccf-4519-9582-8af5c8390e00/volumes" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.656397 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.757978 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-lib-modules\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758034 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758078 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-dev\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758104 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-logs\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758114 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758124 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-iscsi\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758151 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758175 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-httpd-run\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758223 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758243 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-nvme\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758316 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-scripts\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758334 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-config-data\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758366 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-sys\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758392 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7hnd\" (UniqueName: \"kubernetes.io/projected/395ef274-d899-4e7e-ab5b-558771ced96d-kube-api-access-q7hnd\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758408 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-var-locks-brick\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758495 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-run\") pod \"395ef274-d899-4e7e-ab5b-558771ced96d\" (UID: \"395ef274-d899-4e7e-ab5b-558771ced96d\") " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758662 5030 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-dev" (OuterVolumeSpecName: "dev") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.758724 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-run" (OuterVolumeSpecName: "run") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.759018 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.759039 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.759052 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.759062 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.759052 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-nvme" (OuterVolumeSpecName: 
"etc-nvme") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.759550 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-sys" (OuterVolumeSpecName: "sys") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.759892 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.766938 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.779892 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-logs" (OuterVolumeSpecName: "logs") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.793287 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-scripts" (OuterVolumeSpecName: "scripts") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.796195 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.799681 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.802712 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395ef274-d899-4e7e-ab5b-558771ced96d-kube-api-access-q7hnd" (OuterVolumeSpecName: "kube-api-access-q7hnd") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "kube-api-access-q7hnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.860938 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.860977 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.860988 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7hnd\" (UniqueName: \"kubernetes.io/projected/395ef274-d899-4e7e-ab5b-558771ced96d-kube-api-access-q7hnd\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.861002 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.861056 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.861070 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.861082 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395ef274-d899-4e7e-ab5b-558771ced96d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.862596 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started 
for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.862664 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/395ef274-d899-4e7e-ab5b-558771ced96d-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.874046 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.875154 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.889701 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-config-data" (OuterVolumeSpecName: "config-data") pod "395ef274-d899-4e7e-ab5b-558771ced96d" (UID: "395ef274-d899-4e7e-ab5b-558771ced96d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.964419 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.964457 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395ef274-d899-4e7e-ab5b-558771ced96d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.964500 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.966100 5030 generic.go:334] "Generic (PLEG): container finished" podID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerID="405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920" exitCode=0 Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.966154 5030 generic.go:334] "Generic (PLEG): container finished" podID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerID="53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230" exitCode=143 Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.966194 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerDied","Data":"405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920"} Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.966250 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerDied","Data":"53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230"} Nov 28 12:15:40 crc kubenswrapper[5030]: 
I1128 12:15:40.980319 5030 generic.go:334] "Generic (PLEG): container finished" podID="395ef274-d899-4e7e-ab5b-558771ced96d" containerID="ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc" exitCode=0 Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.980382 5030 generic.go:334] "Generic (PLEG): container finished" podID="395ef274-d899-4e7e-ab5b-558771ced96d" containerID="8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445" exitCode=0 Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.980416 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerDied","Data":"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc"} Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.980481 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerDied","Data":"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445"} Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.980505 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"395ef274-d899-4e7e-ab5b-558771ced96d","Type":"ContainerDied","Data":"d5cbc57caf6a0870ebf73b427a0162167ac2d82be6152a5d8a2857ec450cdba4"} Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.980514 5030 scope.go:117] "RemoveContainer" containerID="ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc" Nov 28 12:15:40 crc kubenswrapper[5030]: I1128 12:15:40.980457 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.005132 5030 scope.go:117] "RemoveContainer" containerID="8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.027528 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.037223 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.037809 5030 scope.go:117] "RemoveContainer" containerID="0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.066063 5030 scope.go:117] "RemoveContainer" containerID="ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc" Nov 28 12:15:41 crc kubenswrapper[5030]: E1128 12:15:41.066792 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc\": container with ID starting with ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc not found: ID does not exist" containerID="ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.066867 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc"} err="failed to get container status \"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc\": rpc error: code = NotFound desc = could not find container \"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc\": container with ID starting with ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc not 
found: ID does not exist" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.066916 5030 scope.go:117] "RemoveContainer" containerID="8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445" Nov 28 12:15:41 crc kubenswrapper[5030]: E1128 12:15:41.067459 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445\": container with ID starting with 8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445 not found: ID does not exist" containerID="8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.067550 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445"} err="failed to get container status \"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445\": rpc error: code = NotFound desc = could not find container \"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445\": container with ID starting with 8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445 not found: ID does not exist" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.067581 5030 scope.go:117] "RemoveContainer" containerID="0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad" Nov 28 12:15:41 crc kubenswrapper[5030]: E1128 12:15:41.067972 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad\": container with ID starting with 0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad not found: ID does not exist" containerID="0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.068020 5030 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad"} err="failed to get container status \"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad\": rpc error: code = NotFound desc = could not find container \"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad\": container with ID starting with 0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad not found: ID does not exist" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.068048 5030 scope.go:117] "RemoveContainer" containerID="ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.068395 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc"} err="failed to get container status \"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc\": rpc error: code = NotFound desc = could not find container \"ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc\": container with ID starting with ef1d79c7af7c140a1262dc26eac1b0ec9b03281fe36a639696ba0a48a5bae4fc not found: ID does not exist" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.068442 5030 scope.go:117] "RemoveContainer" containerID="8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.068902 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445"} err="failed to get container status \"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445\": rpc error: code = NotFound desc = could not find container \"8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445\": container with ID starting with 
8322aa834f96d4e334cf47ba132c9a53b8908bb9de30a7fb2ae235feb7247445 not found: ID does not exist" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.068943 5030 scope.go:117] "RemoveContainer" containerID="0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.069323 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad"} err="failed to get container status \"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad\": rpc error: code = NotFound desc = could not find container \"0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad\": container with ID starting with 0322666126b5f42f4bc2997c2e109f57de95365535161e2d7fb9fac5f1d027ad not found: ID does not exist" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.387331 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471012 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-httpd-run\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471089 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471197 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-var-locks-brick\") pod 
\"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471312 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471228 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-iscsi\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471397 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-sys\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471456 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471541 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pgcs\" (UniqueName: \"kubernetes.io/projected/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-kube-api-access-7pgcs\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471582 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-config-data\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471576 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471609 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-scripts\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471648 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-dev\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471672 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-lib-modules\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471698 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-logs\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471724 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471763 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-nvme\") pod 
\"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.471801 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-run\") pod \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\" (UID: \"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b\") " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.472181 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.472196 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.472209 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.472413 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-sys" (OuterVolumeSpecName: "sys") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.472505 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-run" (OuterVolumeSpecName: "run") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.473083 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.473193 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-dev" (OuterVolumeSpecName: "dev") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.473293 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.476104 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "local-storage14-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.476738 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-logs" (OuterVolumeSpecName: "logs") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.476942 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.476993 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-kube-api-access-7pgcs" (OuterVolumeSpecName: "kube-api-access-7pgcs") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "kube-api-access-7pgcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.477605 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-scripts" (OuterVolumeSpecName: "scripts") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.546613 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-config-data" (OuterVolumeSpecName: "config-data") pod "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" (UID: "fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574513 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574549 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574588 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574600 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574610 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pgcs\" (UniqueName: \"kubernetes.io/projected/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-kube-api-access-7pgcs\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574624 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-config-data\") on node 
\"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574633 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574643 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574654 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574662 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.574679 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.594200 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.602995 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.677192 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 28 
12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.677763 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.993950 5030 generic.go:334] "Generic (PLEG): container finished" podID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerID="1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d" exitCode=0 Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.994018 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerDied","Data":"1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d"} Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.994036 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.994060 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b","Type":"ContainerDied","Data":"a951c1591194cc31c9904badb9f0fe6cff409fc3c103c8a79af2b8f08d717c86"} Nov 28 12:15:41 crc kubenswrapper[5030]: I1128 12:15:41.994086 5030 scope.go:117] "RemoveContainer" containerID="1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.020921 5030 scope.go:117] "RemoveContainer" containerID="405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.023328 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.035058 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.047052 5030 scope.go:117] "RemoveContainer" containerID="53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.062857 5030 scope.go:117] "RemoveContainer" containerID="1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d" Nov 28 12:15:42 crc kubenswrapper[5030]: E1128 12:15:42.063452 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d\": container with ID starting with 1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d not found: ID does not exist" containerID="1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.063491 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d"} err="failed to get container status \"1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d\": rpc error: code = NotFound desc = could not find container \"1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d\": container with ID starting with 1a215e4d076a0727c7e4c026a0b52bd141400105ca9c9a93d398f909d213576d not found: ID does not exist" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.063535 5030 scope.go:117] "RemoveContainer" containerID="405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920" Nov 28 12:15:42 crc kubenswrapper[5030]: E1128 12:15:42.063862 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920\": container with ID starting with 
405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920 not found: ID does not exist" containerID="405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.063896 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920"} err="failed to get container status \"405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920\": rpc error: code = NotFound desc = could not find container \"405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920\": container with ID starting with 405c1c3048fd34d8bbead6fbcf4674da6085e22d891a970e70dde59903a0d920 not found: ID does not exist" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.063913 5030 scope.go:117] "RemoveContainer" containerID="53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230" Nov 28 12:15:42 crc kubenswrapper[5030]: E1128 12:15:42.064422 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230\": container with ID starting with 53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230 not found: ID does not exist" containerID="53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.064451 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230"} err="failed to get container status \"53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230\": rpc error: code = NotFound desc = could not find container \"53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230\": container with ID starting with 53bf829139b37feb45ea2e02f34896c09b47a8ff951a01e7a45c94b178a7d230 not found: ID does not 
exist" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.407545 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" path="/var/lib/kubelet/pods/395ef274-d899-4e7e-ab5b-558771ced96d/volumes" Nov 28 12:15:42 crc kubenswrapper[5030]: I1128 12:15:42.408388 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" path="/var/lib/kubelet/pods/fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b/volumes" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.092940 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5s47r"] Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.101064 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-5s47r"] Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176071 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glanced958-account-delete-mscnw"] Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176392 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176406 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176421 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176427 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176451 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-log" Nov 28 12:15:43 
crc kubenswrapper[5030]: I1128 12:15:43.176458 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176470 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176477 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176506 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176512 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176522 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176530 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176541 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176548 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176561 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176567 5030 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176579 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176586 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176597 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176604 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176618 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176625 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176635 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176642 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176653 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176667 5030 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176680 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176687 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176699 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176705 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176713 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176719 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176733 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176740 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: E1128 12:15:43.176748 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176755 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" 
containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176885 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176897 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176905 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176915 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176926 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176934 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176944 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176950 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176956 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176966 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" 
containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176975 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="5739be95-cccf-4519-9582-8af5c8390e00" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176986 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ffc0b5-a6f5-419b-92cc-21a74d507cc9" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.176994 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.177001 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.177009 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9fe36-4bc8-4b0d-831f-f4191dabdabe" containerName="glance-api" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.177018 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="207bcd10-295a-42b0-87e7-c30a3127bc5e" containerName="glance-log" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.177025 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbd7b5d-1c83-4a0e-88c1-8f3846109f6b" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.177035 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="395ef274-d899-4e7e-ab5b-558771ced96d" containerName="glance-httpd" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.177589 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.194223 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glanced958-account-delete-mscnw"] Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.201861 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2607d43-f222-4643-a033-0e170f4fec9a-operator-scripts\") pod \"glanced958-account-delete-mscnw\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.201941 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdxfk\" (UniqueName: \"kubernetes.io/projected/d2607d43-f222-4643-a033-0e170f4fec9a-kube-api-access-jdxfk\") pod \"glanced958-account-delete-mscnw\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.303639 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdxfk\" (UniqueName: \"kubernetes.io/projected/d2607d43-f222-4643-a033-0e170f4fec9a-kube-api-access-jdxfk\") pod \"glanced958-account-delete-mscnw\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.303802 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2607d43-f222-4643-a033-0e170f4fec9a-operator-scripts\") pod \"glanced958-account-delete-mscnw\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 
12:15:43.304916 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2607d43-f222-4643-a033-0e170f4fec9a-operator-scripts\") pod \"glanced958-account-delete-mscnw\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.326036 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdxfk\" (UniqueName: \"kubernetes.io/projected/d2607d43-f222-4643-a033-0e170f4fec9a-kube-api-access-jdxfk\") pod \"glanced958-account-delete-mscnw\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.494082 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:43 crc kubenswrapper[5030]: I1128 12:15:43.801023 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glanced958-account-delete-mscnw"] Nov 28 12:15:44 crc kubenswrapper[5030]: I1128 12:15:44.015222 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" event={"ID":"d2607d43-f222-4643-a033-0e170f4fec9a","Type":"ContainerStarted","Data":"3bd6815b1e5cb4fad77b7b632eb82e68678b930a1ad94a41bb4770ea5736afb8"} Nov 28 12:15:44 crc kubenswrapper[5030]: I1128 12:15:44.015304 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" event={"ID":"d2607d43-f222-4643-a033-0e170f4fec9a","Type":"ContainerStarted","Data":"9286b3b865021df3e8c7051fd39b37cfa363e37354b4e41d2414617d7401bca0"} Nov 28 12:15:44 crc kubenswrapper[5030]: I1128 12:15:44.032237 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" 
podStartSLOduration=1.032217864 podStartE2EDuration="1.032217864s" podCreationTimestamp="2025-11-28 12:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:44.029692695 +0000 UTC m=+1361.971435378" watchObservedRunningTime="2025-11-28 12:15:44.032217864 +0000 UTC m=+1361.973960547" Nov 28 12:15:44 crc kubenswrapper[5030]: I1128 12:15:44.408959 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44250726-643f-4606-ab0d-d7a89342fea0" path="/var/lib/kubelet/pods/44250726-643f-4606-ab0d-d7a89342fea0/volumes" Nov 28 12:15:45 crc kubenswrapper[5030]: I1128 12:15:45.026704 5030 generic.go:334] "Generic (PLEG): container finished" podID="d2607d43-f222-4643-a033-0e170f4fec9a" containerID="3bd6815b1e5cb4fad77b7b632eb82e68678b930a1ad94a41bb4770ea5736afb8" exitCode=0 Nov 28 12:15:45 crc kubenswrapper[5030]: I1128 12:15:45.026761 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" event={"ID":"d2607d43-f222-4643-a033-0e170f4fec9a","Type":"ContainerDied","Data":"3bd6815b1e5cb4fad77b7b632eb82e68678b930a1ad94a41bb4770ea5736afb8"} Nov 28 12:15:46 crc kubenswrapper[5030]: I1128 12:15:46.398789 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:46 crc kubenswrapper[5030]: I1128 12:15:46.559062 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdxfk\" (UniqueName: \"kubernetes.io/projected/d2607d43-f222-4643-a033-0e170f4fec9a-kube-api-access-jdxfk\") pod \"d2607d43-f222-4643-a033-0e170f4fec9a\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " Nov 28 12:15:46 crc kubenswrapper[5030]: I1128 12:15:46.559191 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2607d43-f222-4643-a033-0e170f4fec9a-operator-scripts\") pod \"d2607d43-f222-4643-a033-0e170f4fec9a\" (UID: \"d2607d43-f222-4643-a033-0e170f4fec9a\") " Nov 28 12:15:46 crc kubenswrapper[5030]: I1128 12:15:46.560133 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2607d43-f222-4643-a033-0e170f4fec9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2607d43-f222-4643-a033-0e170f4fec9a" (UID: "d2607d43-f222-4643-a033-0e170f4fec9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:15:46 crc kubenswrapper[5030]: I1128 12:15:46.567537 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2607d43-f222-4643-a033-0e170f4fec9a-kube-api-access-jdxfk" (OuterVolumeSpecName: "kube-api-access-jdxfk") pod "d2607d43-f222-4643-a033-0e170f4fec9a" (UID: "d2607d43-f222-4643-a033-0e170f4fec9a"). InnerVolumeSpecName "kube-api-access-jdxfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:46 crc kubenswrapper[5030]: I1128 12:15:46.661323 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdxfk\" (UniqueName: \"kubernetes.io/projected/d2607d43-f222-4643-a033-0e170f4fec9a-kube-api-access-jdxfk\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:46 crc kubenswrapper[5030]: I1128 12:15:46.661367 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2607d43-f222-4643-a033-0e170f4fec9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:47 crc kubenswrapper[5030]: I1128 12:15:47.040131 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" event={"ID":"d2607d43-f222-4643-a033-0e170f4fec9a","Type":"ContainerDied","Data":"9286b3b865021df3e8c7051fd39b37cfa363e37354b4e41d2414617d7401bca0"} Nov 28 12:15:47 crc kubenswrapper[5030]: I1128 12:15:47.040173 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9286b3b865021df3e8c7051fd39b37cfa363e37354b4e41d2414617d7401bca0" Nov 28 12:15:47 crc kubenswrapper[5030]: I1128 12:15:47.040227 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced958-account-delete-mscnw" Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.203259 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-mrjj6"] Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.215387 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-mrjj6"] Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.225926 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glanced958-account-delete-mscnw"] Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.232108 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-d958-account-create-update-f7jjt"] Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.259212 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-d958-account-create-update-f7jjt"] Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.273742 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glanced958-account-delete-mscnw"] Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.408100 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181c230a-8790-4356-9629-926496d09c14" path="/var/lib/kubelet/pods/181c230a-8790-4356-9629-926496d09c14/volumes" Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.409268 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="491c27aa-ae62-413f-805d-7a9c200f53eb" path="/var/lib/kubelet/pods/491c27aa-ae62-413f-805d-7a9c200f53eb/volumes" Nov 28 12:15:48 crc kubenswrapper[5030]: I1128 12:15:48.410284 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2607d43-f222-4643-a033-0e170f4fec9a" path="/var/lib/kubelet/pods/d2607d43-f222-4643-a033-0e170f4fec9a/volumes" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.308370 5030 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["glance-kuttl-tests/glance-db-create-5p78d"] Nov 28 12:15:49 crc kubenswrapper[5030]: E1128 12:15:49.309378 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2607d43-f222-4643-a033-0e170f4fec9a" containerName="mariadb-account-delete" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.309398 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2607d43-f222-4643-a033-0e170f4fec9a" containerName="mariadb-account-delete" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.309616 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2607d43-f222-4643-a033-0e170f4fec9a" containerName="mariadb-account-delete" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.310238 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.317871 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-82b7-account-create-update-xkcqd"] Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.319174 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.321583 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.325688 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-5p78d"] Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.338199 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-82b7-account-create-update-xkcqd"] Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.510338 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98r5p\" (UniqueName: \"kubernetes.io/projected/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-kube-api-access-98r5p\") pod \"glance-82b7-account-create-update-xkcqd\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.510870 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-operator-scripts\") pod \"glance-82b7-account-create-update-xkcqd\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.510970 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkfkr\" (UniqueName: \"kubernetes.io/projected/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-kube-api-access-vkfkr\") pod \"glance-db-create-5p78d\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.511103 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-operator-scripts\") pod \"glance-db-create-5p78d\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.612652 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkfkr\" (UniqueName: \"kubernetes.io/projected/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-kube-api-access-vkfkr\") pod \"glance-db-create-5p78d\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.612783 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-operator-scripts\") pod \"glance-db-create-5p78d\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.612847 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98r5p\" (UniqueName: \"kubernetes.io/projected/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-kube-api-access-98r5p\") pod \"glance-82b7-account-create-update-xkcqd\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.612989 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-operator-scripts\") pod \"glance-82b7-account-create-update-xkcqd\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 
12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.614180 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-operator-scripts\") pod \"glance-db-create-5p78d\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.614504 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-operator-scripts\") pod \"glance-82b7-account-create-update-xkcqd\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.643191 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98r5p\" (UniqueName: \"kubernetes.io/projected/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-kube-api-access-98r5p\") pod \"glance-82b7-account-create-update-xkcqd\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.643210 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkfkr\" (UniqueName: \"kubernetes.io/projected/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-kube-api-access-vkfkr\") pod \"glance-db-create-5p78d\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.643791 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:49 crc kubenswrapper[5030]: I1128 12:15:49.652155 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:50 crc kubenswrapper[5030]: I1128 12:15:50.138808 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-5p78d"] Nov 28 12:15:50 crc kubenswrapper[5030]: I1128 12:15:50.228644 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-82b7-account-create-update-xkcqd"] Nov 28 12:15:50 crc kubenswrapper[5030]: W1128 12:15:50.236443 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58926cd1_1db9_4ad5_a1fd_4f13e28eec20.slice/crio-8d91b79262c66aaebba42e8e07b6576780105e654ec470013665d0735256684e WatchSource:0}: Error finding container 8d91b79262c66aaebba42e8e07b6576780105e654ec470013665d0735256684e: Status 404 returned error can't find the container with id 8d91b79262c66aaebba42e8e07b6576780105e654ec470013665d0735256684e Nov 28 12:15:51 crc kubenswrapper[5030]: I1128 12:15:51.090622 5030 generic.go:334] "Generic (PLEG): container finished" podID="87ddeb74-27df-42cc-aadc-c7d68c79f0c4" containerID="00a9daa56c4d2d280c0b4d53d6556758e5e2c86a07f6e63669152359f335196c" exitCode=0 Nov 28 12:15:51 crc kubenswrapper[5030]: I1128 12:15:51.091271 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-5p78d" event={"ID":"87ddeb74-27df-42cc-aadc-c7d68c79f0c4","Type":"ContainerDied","Data":"00a9daa56c4d2d280c0b4d53d6556758e5e2c86a07f6e63669152359f335196c"} Nov 28 12:15:51 crc kubenswrapper[5030]: I1128 12:15:51.091328 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-5p78d" event={"ID":"87ddeb74-27df-42cc-aadc-c7d68c79f0c4","Type":"ContainerStarted","Data":"be50f5a6f74c611009feeda03e9d2d0512ce36b76a6dadaba6c1a9d7313f41fb"} Nov 28 12:15:51 crc kubenswrapper[5030]: I1128 12:15:51.098332 5030 generic.go:334] "Generic (PLEG): container finished" 
podID="58926cd1-1db9-4ad5-a1fd-4f13e28eec20" containerID="5c761ed53baee30ed2a9ccc0d5fe42622ed5103de16fbf0270ee8618ebed7342" exitCode=0 Nov 28 12:15:51 crc kubenswrapper[5030]: I1128 12:15:51.098444 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" event={"ID":"58926cd1-1db9-4ad5-a1fd-4f13e28eec20","Type":"ContainerDied","Data":"5c761ed53baee30ed2a9ccc0d5fe42622ed5103de16fbf0270ee8618ebed7342"} Nov 28 12:15:51 crc kubenswrapper[5030]: I1128 12:15:51.098528 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" event={"ID":"58926cd1-1db9-4ad5-a1fd-4f13e28eec20","Type":"ContainerStarted","Data":"8d91b79262c66aaebba42e8e07b6576780105e654ec470013665d0735256684e"} Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.609859 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.612978 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.778641 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkfkr\" (UniqueName: \"kubernetes.io/projected/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-kube-api-access-vkfkr\") pod \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.778702 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-operator-scripts\") pod \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.778743 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98r5p\" (UniqueName: \"kubernetes.io/projected/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-kube-api-access-98r5p\") pod \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\" (UID: \"58926cd1-1db9-4ad5-a1fd-4f13e28eec20\") " Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.778803 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-operator-scripts\") pod \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\" (UID: \"87ddeb74-27df-42cc-aadc-c7d68c79f0c4\") " Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.779855 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87ddeb74-27df-42cc-aadc-c7d68c79f0c4" (UID: "87ddeb74-27df-42cc-aadc-c7d68c79f0c4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.780205 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58926cd1-1db9-4ad5-a1fd-4f13e28eec20" (UID: "58926cd1-1db9-4ad5-a1fd-4f13e28eec20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.786279 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-kube-api-access-vkfkr" (OuterVolumeSpecName: "kube-api-access-vkfkr") pod "87ddeb74-27df-42cc-aadc-c7d68c79f0c4" (UID: "87ddeb74-27df-42cc-aadc-c7d68c79f0c4"). InnerVolumeSpecName "kube-api-access-vkfkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.786678 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-kube-api-access-98r5p" (OuterVolumeSpecName: "kube-api-access-98r5p") pod "58926cd1-1db9-4ad5-a1fd-4f13e28eec20" (UID: "58926cd1-1db9-4ad5-a1fd-4f13e28eec20"). InnerVolumeSpecName "kube-api-access-98r5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.880735 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.880786 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98r5p\" (UniqueName: \"kubernetes.io/projected/58926cd1-1db9-4ad5-a1fd-4f13e28eec20-kube-api-access-98r5p\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.880803 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:52 crc kubenswrapper[5030]: I1128 12:15:52.880816 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkfkr\" (UniqueName: \"kubernetes.io/projected/87ddeb74-27df-42cc-aadc-c7d68c79f0c4-kube-api-access-vkfkr\") on node \"crc\" DevicePath \"\"" Nov 28 12:15:53 crc kubenswrapper[5030]: I1128 12:15:53.123987 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" Nov 28 12:15:53 crc kubenswrapper[5030]: I1128 12:15:53.124091 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-82b7-account-create-update-xkcqd" event={"ID":"58926cd1-1db9-4ad5-a1fd-4f13e28eec20","Type":"ContainerDied","Data":"8d91b79262c66aaebba42e8e07b6576780105e654ec470013665d0735256684e"} Nov 28 12:15:53 crc kubenswrapper[5030]: I1128 12:15:53.124180 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d91b79262c66aaebba42e8e07b6576780105e654ec470013665d0735256684e" Nov 28 12:15:53 crc kubenswrapper[5030]: I1128 12:15:53.128941 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-5p78d" event={"ID":"87ddeb74-27df-42cc-aadc-c7d68c79f0c4","Type":"ContainerDied","Data":"be50f5a6f74c611009feeda03e9d2d0512ce36b76a6dadaba6c1a9d7313f41fb"} Nov 28 12:15:53 crc kubenswrapper[5030]: I1128 12:15:53.129018 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be50f5a6f74c611009feeda03e9d2d0512ce36b76a6dadaba6c1a9d7313f41fb" Nov 28 12:15:53 crc kubenswrapper[5030]: I1128 12:15:53.129132 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-5p78d" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.450331 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-44kdg"] Nov 28 12:15:54 crc kubenswrapper[5030]: E1128 12:15:54.450724 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ddeb74-27df-42cc-aadc-c7d68c79f0c4" containerName="mariadb-database-create" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.450739 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ddeb74-27df-42cc-aadc-c7d68c79f0c4" containerName="mariadb-database-create" Nov 28 12:15:54 crc kubenswrapper[5030]: E1128 12:15:54.450751 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58926cd1-1db9-4ad5-a1fd-4f13e28eec20" containerName="mariadb-account-create-update" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.450757 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="58926cd1-1db9-4ad5-a1fd-4f13e28eec20" containerName="mariadb-account-create-update" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.450896 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ddeb74-27df-42cc-aadc-c7d68c79f0c4" containerName="mariadb-database-create" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.450916 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="58926cd1-1db9-4ad5-a1fd-4f13e28eec20" containerName="mariadb-account-create-update" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.451371 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.453750 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-pdnjz" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.453751 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.465382 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-44kdg"] Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.610450 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hc26\" (UniqueName: \"kubernetes.io/projected/46e15600-df1f-4328-b78a-938c6d7789fc-kube-api-access-4hc26\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.611212 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-config-data\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.611261 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-db-sync-config-data\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.713656 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-config-data\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.713733 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-db-sync-config-data\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.713954 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hc26\" (UniqueName: \"kubernetes.io/projected/46e15600-df1f-4328-b78a-938c6d7789fc-kube-api-access-4hc26\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.719402 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-config-data\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.731678 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-db-sync-config-data\") pod \"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.754439 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hc26\" (UniqueName: \"kubernetes.io/projected/46e15600-df1f-4328-b78a-938c6d7789fc-kube-api-access-4hc26\") pod 
\"glance-db-sync-44kdg\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:54 crc kubenswrapper[5030]: I1128 12:15:54.765592 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:15:55 crc kubenswrapper[5030]: I1128 12:15:55.273317 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-44kdg"] Nov 28 12:15:55 crc kubenswrapper[5030]: W1128 12:15:55.276334 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46e15600_df1f_4328_b78a_938c6d7789fc.slice/crio-5e62145a2b723b052d668d2844831bed3c6cff40bed452104d16ca6542c0de4b WatchSource:0}: Error finding container 5e62145a2b723b052d668d2844831bed3c6cff40bed452104d16ca6542c0de4b: Status 404 returned error can't find the container with id 5e62145a2b723b052d668d2844831bed3c6cff40bed452104d16ca6542c0de4b Nov 28 12:15:56 crc kubenswrapper[5030]: I1128 12:15:56.155821 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-44kdg" event={"ID":"46e15600-df1f-4328-b78a-938c6d7789fc","Type":"ContainerStarted","Data":"850a06ebdab719c534f763269f68d03b510d82ebfc12392ed05b0571ffb716f2"} Nov 28 12:15:56 crc kubenswrapper[5030]: I1128 12:15:56.156436 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-44kdg" event={"ID":"46e15600-df1f-4328-b78a-938c6d7789fc","Type":"ContainerStarted","Data":"5e62145a2b723b052d668d2844831bed3c6cff40bed452104d16ca6542c0de4b"} Nov 28 12:15:56 crc kubenswrapper[5030]: I1128 12:15:56.172393 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-44kdg" podStartSLOduration=2.172372677 podStartE2EDuration="2.172372677s" podCreationTimestamp="2025-11-28 12:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:15:56.171575976 +0000 UTC m=+1374.113318679" watchObservedRunningTime="2025-11-28 12:15:56.172372677 +0000 UTC m=+1374.114115360" Nov 28 12:15:59 crc kubenswrapper[5030]: I1128 12:15:59.246731 5030 generic.go:334] "Generic (PLEG): container finished" podID="46e15600-df1f-4328-b78a-938c6d7789fc" containerID="850a06ebdab719c534f763269f68d03b510d82ebfc12392ed05b0571ffb716f2" exitCode=0 Nov 28 12:15:59 crc kubenswrapper[5030]: I1128 12:15:59.246978 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-44kdg" event={"ID":"46e15600-df1f-4328-b78a-938c6d7789fc","Type":"ContainerDied","Data":"850a06ebdab719c534f763269f68d03b510d82ebfc12392ed05b0571ffb716f2"} Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.614052 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.743150 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-db-sync-config-data\") pod \"46e15600-df1f-4328-b78a-938c6d7789fc\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.743230 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hc26\" (UniqueName: \"kubernetes.io/projected/46e15600-df1f-4328-b78a-938c6d7789fc-kube-api-access-4hc26\") pod \"46e15600-df1f-4328-b78a-938c6d7789fc\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.743333 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-config-data\") pod 
\"46e15600-df1f-4328-b78a-938c6d7789fc\" (UID: \"46e15600-df1f-4328-b78a-938c6d7789fc\") " Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.751717 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "46e15600-df1f-4328-b78a-938c6d7789fc" (UID: "46e15600-df1f-4328-b78a-938c6d7789fc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.752038 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e15600-df1f-4328-b78a-938c6d7789fc-kube-api-access-4hc26" (OuterVolumeSpecName: "kube-api-access-4hc26") pod "46e15600-df1f-4328-b78a-938c6d7789fc" (UID: "46e15600-df1f-4328-b78a-938c6d7789fc"). InnerVolumeSpecName "kube-api-access-4hc26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.797970 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-config-data" (OuterVolumeSpecName: "config-data") pod "46e15600-df1f-4328-b78a-938c6d7789fc" (UID: "46e15600-df1f-4328-b78a-938c6d7789fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.845236 5030 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.845278 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hc26\" (UniqueName: \"kubernetes.io/projected/46e15600-df1f-4328-b78a-938c6d7789fc-kube-api-access-4hc26\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:00 crc kubenswrapper[5030]: I1128 12:16:00.845291 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e15600-df1f-4328-b78a-938c6d7789fc-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:01 crc kubenswrapper[5030]: I1128 12:16:01.267077 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-44kdg" event={"ID":"46e15600-df1f-4328-b78a-938c6d7789fc","Type":"ContainerDied","Data":"5e62145a2b723b052d668d2844831bed3c6cff40bed452104d16ca6542c0de4b"} Nov 28 12:16:01 crc kubenswrapper[5030]: I1128 12:16:01.267129 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e62145a2b723b052d668d2844831bed3c6cff40bed452104d16ca6542c0de4b" Nov 28 12:16:01 crc kubenswrapper[5030]: I1128 12:16:01.267268 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-44kdg" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.790458 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:16:02 crc kubenswrapper[5030]: E1128 12:16:02.791192 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e15600-df1f-4328-b78a-938c6d7789fc" containerName="glance-db-sync" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.791207 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e15600-df1f-4328-b78a-938c6d7789fc" containerName="glance-db-sync" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.791380 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e15600-df1f-4328-b78a-938c6d7789fc" containerName="glance-db-sync" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.792282 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.795285 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.795492 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.796669 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-pdnjz" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.806987 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.820272 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.821665 5030 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: W1128 12:16:02.830365 5030 reflector.go:561] object-"glance-kuttl-tests"/"glance-default-internal-config-data": failed to list *v1.Secret: secrets "glance-default-internal-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "glance-kuttl-tests": no relationship found between node 'crc' and this object Nov 28 12:16:02 crc kubenswrapper[5030]: E1128 12:16:02.830413 5030 reflector.go:158] "Unhandled Error" err="object-\"glance-kuttl-tests\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-default-internal-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"glance-kuttl-tests\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.839760 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984486 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-sys\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984547 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 
crc kubenswrapper[5030]: I1128 12:16:02.984573 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgvc\" (UniqueName: \"kubernetes.io/projected/fcc16ff7-97d5-4a61-a722-98fb7c811637-kube-api-access-dpgvc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984599 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984622 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984649 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-logs\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984668 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984687 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984704 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-sys\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.984961 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59ld\" (UniqueName: \"kubernetes.io/projected/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-kube-api-access-f59ld\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985091 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985130 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985161 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985182 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985333 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985373 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985407 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985433 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-run\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985448 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-dev\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985515 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985534 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985568 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" 
(UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-dev\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985586 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985605 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985627 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-run\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985644 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985660 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:02 crc kubenswrapper[5030]: I1128 12:16:02.985679 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.086829 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.086871 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.086898 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-dev\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.086920 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.086989 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-dev\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087002 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087102 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.086940 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087194 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-nvme\") pod 
\"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087214 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-run\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087272 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087299 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-run\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087320 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087341 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087383 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-sys\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087411 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087536 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-sys\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087775 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087827 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") device mount path \"/mnt/openstack/pv02\"" 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087903 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087954 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpgvc\" (UniqueName: \"kubernetes.io/projected/fcc16ff7-97d5-4a61-a722-98fb7c811637-kube-api-access-dpgvc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.087977 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088318 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088345 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-logs\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088389 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088417 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088499 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088522 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-sys\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088547 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59ld\" (UniqueName: \"kubernetes.io/projected/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-kube-api-access-f59ld\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088581 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088605 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088623 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088639 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088657 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 
28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088675 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088703 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088723 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-run\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088742 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-dev\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088755 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088819 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-sys\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088934 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.089103 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.088801 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-dev\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.089390 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.089424 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.089670 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.089680 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.089708 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-run\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.089770 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.096528 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.096974 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.097645 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.109420 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-logs\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.122659 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.126981 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpgvc\" (UniqueName: \"kubernetes.io/projected/fcc16ff7-97d5-4a61-a722-98fb7c811637-kube-api-access-dpgvc\") pod 
\"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.130070 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.131585 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.139172 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f59ld\" (UniqueName: \"kubernetes.io/projected/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-kube-api-access-f59ld\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.139670 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-0\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.413766 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.682708 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:03 crc kubenswrapper[5030]: E1128 12:16:03.684429 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data], unattached volumes=[], failed to process volumes=[]: context canceled" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.750649 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.755119 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 28 12:16:03 crc kubenswrapper[5030]: I1128 12:16:03.765262 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.297932 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.297942 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"fcc16ff7-97d5-4a61-a722-98fb7c811637","Type":"ContainerStarted","Data":"e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97"} Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.299044 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"fcc16ff7-97d5-4a61-a722-98fb7c811637","Type":"ContainerStarted","Data":"cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513"} Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.299073 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"fcc16ff7-97d5-4a61-a722-98fb7c811637","Type":"ContainerStarted","Data":"9c44d055e03d1a39b48097825148878f4336a055dc81432b0076cd1bf44a8f50"} Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.320844 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.342627 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.342602436 podStartE2EDuration="2.342602436s" podCreationTimestamp="2025-11-28 12:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:04.339078461 +0000 UTC m=+1382.280821164" watchObservedRunningTime="2025-11-28 12:16:04.342602436 +0000 UTC m=+1382.284345109" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411517 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-config-data\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411594 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-iscsi\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411644 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-httpd-run\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411669 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-var-locks-brick\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" 
(UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411765 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-run\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411788 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411855 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411911 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411973 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-dev\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.411995 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412009 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-run" (OuterVolumeSpecName: "run") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412050 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-dev" (OuterVolumeSpecName: "dev") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412039 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f59ld\" (UniqueName: \"kubernetes.io/projected/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-kube-api-access-f59ld\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412208 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-nvme\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412250 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412288 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412346 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-scripts\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412416 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-logs\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412456 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-sys\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412979 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-lib-modules\") pod \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\" (UID: \"5ea7488b-cd8c-412e-a2d7-5af4ffc9705b\") " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412594 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-sys" (OuterVolumeSpecName: "sys") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.412902 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-logs" (OuterVolumeSpecName: "logs") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.413042 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414283 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414326 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414339 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414352 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414364 5030 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414376 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414391 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414403 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.414419 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.419265 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-scripts" (OuterVolumeSpecName: "scripts") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.419393 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-kube-api-access-f59ld" (OuterVolumeSpecName: "kube-api-access-f59ld") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). 
InnerVolumeSpecName "kube-api-access-f59ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.419423 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.420566 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.435885 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-config-data" (OuterVolumeSpecName: "config-data") pod "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" (UID: "5ea7488b-cd8c-412e-a2d7-5af4ffc9705b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.515815 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.515851 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.515887 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.515906 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f59ld\" (UniqueName: \"kubernetes.io/projected/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b-kube-api-access-f59ld\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.515926 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.535244 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.535271 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.617896 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:04 crc kubenswrapper[5030]: I1128 12:16:04.617960 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.310388 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.392019 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.398141 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.415621 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.417159 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.422661 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.427847 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536016 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536071 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536107 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536131 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") 
" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536168 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536191 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-run\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536213 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536236 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536252 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536271 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-sys\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536287 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgqhd\" (UniqueName: \"kubernetes.io/projected/14d40f48-84b0-4e52-878c-941e9433eb63-kube-api-access-pgqhd\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536326 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-dev\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536347 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.536367 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638386 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-sys\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638442 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgqhd\" (UniqueName: \"kubernetes.io/projected/14d40f48-84b0-4e52-878c-941e9433eb63-kube-api-access-pgqhd\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638510 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-dev\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638525 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-sys\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638539 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638616 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-dev\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638631 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638705 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638870 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638901 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") device mount path 
\"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.638905 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639152 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639208 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639270 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639292 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" 
Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639317 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-run\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639352 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639388 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639419 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-logs\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639430 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-run\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639534 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639559 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639603 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.639659 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.640113 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-logs\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.647066 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-scripts\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.655180 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-config-data\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.659554 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgqhd\" (UniqueName: \"kubernetes.io/projected/14d40f48-84b0-4e52-878c-941e9433eb63-kube-api-access-pgqhd\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.665548 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.672658 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:05 crc kubenswrapper[5030]: I1128 12:16:05.740263 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:06 crc kubenswrapper[5030]: I1128 12:16:06.076938 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:06 crc kubenswrapper[5030]: I1128 12:16:06.351649 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"14d40f48-84b0-4e52-878c-941e9433eb63","Type":"ContainerStarted","Data":"aba50d022e7b2857b4f69f39375e7451ba658171b73188f76378d6f322bdab43"} Nov 28 12:16:06 crc kubenswrapper[5030]: I1128 12:16:06.353535 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"14d40f48-84b0-4e52-878c-941e9433eb63","Type":"ContainerStarted","Data":"1e345c01ece0f06a76846695499fa27e604d024e6b4ad2de6182ddb849acc9fb"} Nov 28 12:16:06 crc kubenswrapper[5030]: I1128 12:16:06.401697 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea7488b-cd8c-412e-a2d7-5af4ffc9705b" path="/var/lib/kubelet/pods/5ea7488b-cd8c-412e-a2d7-5af4ffc9705b/volumes" Nov 28 12:16:07 crc kubenswrapper[5030]: I1128 12:16:07.362385 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"14d40f48-84b0-4e52-878c-941e9433eb63","Type":"ContainerStarted","Data":"1c9a8692787215c9e6644031d6deb80f03cb45f5c4633d2808a33463220ea85a"} Nov 28 12:16:07 crc kubenswrapper[5030]: I1128 12:16:07.393157 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.393114664 podStartE2EDuration="2.393114664s" podCreationTimestamp="2025-11-28 12:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:07.388099868 +0000 UTC m=+1385.329842611" 
watchObservedRunningTime="2025-11-28 12:16:07.393114664 +0000 UTC m=+1385.334857367" Nov 28 12:16:13 crc kubenswrapper[5030]: I1128 12:16:13.414043 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:13 crc kubenswrapper[5030]: I1128 12:16:13.414801 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:13 crc kubenswrapper[5030]: I1128 12:16:13.454243 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:13 crc kubenswrapper[5030]: I1128 12:16:13.485042 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:14 crc kubenswrapper[5030]: I1128 12:16:14.429861 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:14 crc kubenswrapper[5030]: I1128 12:16:14.429944 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:15 crc kubenswrapper[5030]: I1128 12:16:15.741650 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:15 crc kubenswrapper[5030]: I1128 12:16:15.742269 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:15 crc kubenswrapper[5030]: I1128 12:16:15.787158 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:15 crc kubenswrapper[5030]: I1128 12:16:15.799743 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:16 crc kubenswrapper[5030]: I1128 12:16:16.428623 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:16 crc kubenswrapper[5030]: I1128 12:16:16.436543 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:16 crc kubenswrapper[5030]: I1128 12:16:16.449507 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:16 crc kubenswrapper[5030]: I1128 12:16:16.449591 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:18 crc kubenswrapper[5030]: I1128 12:16:18.340188 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:18 crc kubenswrapper[5030]: I1128 12:16:18.341194 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.667993 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.670962 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.698123 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.701362 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.701560 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.777376 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.854987 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855044 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-dev\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855072 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " 
pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855117 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-config-data\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855313 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-scripts\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855351 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkfbd\" (UniqueName: \"kubernetes.io/projected/b18a2f88-8006-4e0b-b55f-e4c873e90614-kube-api-access-vkfbd\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855392 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855438 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-lib-modules\") pod 
\"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855843 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-sys\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855909 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-logs\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855939 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.855974 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.856107 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-run\") pod 
\"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.856230 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.875630 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.877230 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.887126 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.888848 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.900106 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.916713 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958390 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958438 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-scripts\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958490 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958524 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-run\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc 
kubenswrapper[5030]: I1128 12:16:20.958549 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958573 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958590 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958611 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-dev\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.958627 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 
12:16:20.958654 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-config-data\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.959627 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-scripts\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.960359 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.960438 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.960489 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-run\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.960529 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.960578 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-dev\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.960729 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkfbd\" (UniqueName: \"kubernetes.io/projected/b18a2f88-8006-4e0b-b55f-e4c873e90614-kube-api-access-vkfbd\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961048 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961129 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961156 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-config-data\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961185 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8wm\" (UniqueName: \"kubernetes.io/projected/059f6436-7e7e-4d3f-a114-43b7825b175e-kube-api-access-qx8wm\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961208 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961234 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961253 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-logs\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961307 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962536 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961624 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.961556 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962714 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-dev\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 
12:16:20.962742 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962775 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-sys\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962796 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962812 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-run\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962832 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-logs\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962891 5030 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-sys\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.962929 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.963086 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-logs\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.963089 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.963122 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-sys\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.967427 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-scripts\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.967772 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-config-data\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.984804 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.990099 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkfbd\" (UniqueName: \"kubernetes.io/projected/b18a2f88-8006-4e0b-b55f-e4c873e90614-kube-api-access-vkfbd\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:20 crc kubenswrapper[5030]: I1128 12:16:20.993784 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.013217 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.064907 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-logs\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.065312 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.065502 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.065515 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-logs\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.065769 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " 
pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.065905 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066061 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-run\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066225 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066384 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-dev\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066556 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " 
pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066700 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066836 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066979 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-run\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.067156 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-sys\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.067306 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc 
kubenswrapper[5030]: I1128 12:16:21.067610 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-scripts\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.067756 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.067959 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-config-data\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.068109 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.068242 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-run\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc 
kubenswrapper[5030]: I1128 12:16:21.068401 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-sys\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.068592 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-dev\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.068719 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-scripts\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.068952 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvng\" (UniqueName: \"kubernetes.io/projected/35f838f4-cb87-481a-8265-02831a9749e1-kube-api-access-jwvng\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.069103 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" 
Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.069256 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.069385 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.069601 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-scripts\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.069817 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.069953 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt9vl\" (UniqueName: \"kubernetes.io/projected/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-kube-api-access-mt9vl\") pod \"glance-default-internal-api-2\" (UID: 
\"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.070097 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.070222 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-dev\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.070350 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.070569 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.070712 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-httpd-run\") pod 
\"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.070851 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-config-data\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.071112 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-logs\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.071268 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-sys\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.071406 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.071581 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8wm\" (UniqueName: 
\"kubernetes.io/projected/059f6436-7e7e-4d3f-a114-43b7825b175e-kube-api-access-qx8wm\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.071710 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.071854 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-config-data\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.072075 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-logs\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.072226 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.072404 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.072709 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.066484 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-dev\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.073245 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.079076 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-run\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.079144 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-sys\") pod 
\"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.079243 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") device mount path \"/mnt/openstack/pv04\"" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.085872 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.086695 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.097415 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-scripts\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.108798 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-config-data\") pod \"glance-default-external-api-2\" (UID: 
\"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.127562 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8wm\" (UniqueName: \"kubernetes.io/projected/059f6436-7e7e-4d3f-a114-43b7825b175e-kube-api-access-qx8wm\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.128951 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.141918 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-2\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174678 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174765 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-config-data\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174788 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174807 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-run\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174826 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-sys\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174848 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-dev\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174861 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-scripts\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc 
kubenswrapper[5030]: I1128 12:16:21.174888 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvng\" (UniqueName: \"kubernetes.io/projected/35f838f4-cb87-481a-8265-02831a9749e1-kube-api-access-jwvng\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174908 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174926 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174963 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-scripts\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.174989 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 
12:16:21.175007 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt9vl\" (UniqueName: \"kubernetes.io/projected/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-kube-api-access-mt9vl\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175029 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175050 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-dev\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175066 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175100 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175115 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175132 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-logs\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175148 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-sys\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175165 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175185 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-config-data\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175201 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175221 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-logs\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175239 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175267 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-run\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175286 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175305 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175418 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.175552 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.176249 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.178809 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-run\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.178821 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.179012 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-dev\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.179141 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.179588 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-sys\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180142 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-logs\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180216 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: 
\"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180276 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180586 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-run\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180605 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180735 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180776 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180605 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180837 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-dev\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.180894 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.181070 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-logs\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.181185 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: 
I1128 12:16:21.180741 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.190525 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-config-data\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.201232 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-scripts\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.203357 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-sys\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.204603 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-config-data\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.217249 5030 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-scripts\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.228570 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt9vl\" (UniqueName: \"kubernetes.io/projected/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-kube-api-access-mt9vl\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.236131 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvng\" (UniqueName: \"kubernetes.io/projected/35f838f4-cb87-481a-8265-02831a9749e1-kube-api-access-jwvng\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.258173 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.269109 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.277692 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-2\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.291612 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.348078 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.506052 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.515087 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.568001 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:16:21 crc kubenswrapper[5030]: W1128 12:16:21.585026 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod059f6436_7e7e_4d3f_a114_43b7825b175e.slice/crio-e4221934761f7d98c420510910cb73b1224dac6090ce740bd5c9e14128ac4192 WatchSource:0}: Error finding container e4221934761f7d98c420510910cb73b1224dac6090ce740bd5c9e14128ac4192: Status 404 returned error can't find the container with id e4221934761f7d98c420510910cb73b1224dac6090ce740bd5c9e14128ac4192 Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.612277 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:16:21 crc kubenswrapper[5030]: W1128 12:16:21.630377 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18a2f88_8006_4e0b_b55f_e4c873e90614.slice/crio-294832f5f1c48f0ee54238df57828ccdc5f6abccd7b31f7a1f954d83d0814b3e WatchSource:0}: Error finding container 294832f5f1c48f0ee54238df57828ccdc5f6abccd7b31f7a1f954d83d0814b3e: Status 404 returned error can't find the container with id 294832f5f1c48f0ee54238df57828ccdc5f6abccd7b31f7a1f954d83d0814b3e Nov 28 12:16:21 crc kubenswrapper[5030]: I1128 12:16:21.813320 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.132005 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.513019 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"35f838f4-cb87-481a-8265-02831a9749e1","Type":"ContainerStarted","Data":"436c837aaa455d00ad7832d0d3983190e74da414cfbc264ef7f4ca069b655226"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.514127 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"35f838f4-cb87-481a-8265-02831a9749e1","Type":"ContainerStarted","Data":"5fe7ca0ca4a26f180e864bdb55a74afbd4c35d5e85a62cd00de34d8ebca93654"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.514143 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"35f838f4-cb87-481a-8265-02831a9749e1","Type":"ContainerStarted","Data":"31af4b641d4c5d3f1208187c960651520861078515bf77c0a56d823bfb5d19a4"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.515791 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"b18a2f88-8006-4e0b-b55f-e4c873e90614","Type":"ContainerStarted","Data":"a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.515886 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"b18a2f88-8006-4e0b-b55f-e4c873e90614","Type":"ContainerStarted","Data":"a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.515960 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"b18a2f88-8006-4e0b-b55f-e4c873e90614","Type":"ContainerStarted","Data":"294832f5f1c48f0ee54238df57828ccdc5f6abccd7b31f7a1f954d83d0814b3e"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.518670 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"059f6436-7e7e-4d3f-a114-43b7825b175e","Type":"ContainerStarted","Data":"65738f683417c13505563eea46bbaebc9f083b2e2936ee6b7dbbec45625189e7"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.518799 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"059f6436-7e7e-4d3f-a114-43b7825b175e","Type":"ContainerStarted","Data":"4b3ba84c0e48979662e71a4372b7dea88fd512f6fb33862159d500aa78bda527"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.518881 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"059f6436-7e7e-4d3f-a114-43b7825b175e","Type":"ContainerStarted","Data":"e4221934761f7d98c420510910cb73b1224dac6090ce740bd5c9e14128ac4192"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.523157 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85","Type":"ContainerStarted","Data":"e3b18766410c94e7be5de841b96e21b9bc0d4fe28e8f14130881f3b0375ae806"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.523222 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85","Type":"ContainerStarted","Data":"1cd33b1d4bade3a30ff1a0f37cb55c003d01cb3e1f3166a7aaaac7fac2554b61"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.523236 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85","Type":"ContainerStarted","Data":"5548e75b8991550d8268e871ab004ef0f5420df77b38368a64861eeb97a3846d"} Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.552902 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=3.552882168 podStartE2EDuration="3.552882168s" podCreationTimestamp="2025-11-28 12:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:22.552147188 +0000 UTC m=+1400.493889871" watchObservedRunningTime="2025-11-28 12:16:22.552882168 +0000 UTC m=+1400.494624851" Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.581858 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" podStartSLOduration=3.581836862 podStartE2EDuration="3.581836862s" podCreationTimestamp="2025-11-28 12:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:22.577341991 +0000 UTC m=+1400.519084694" watchObservedRunningTime="2025-11-28 12:16:22.581836862 +0000 UTC m=+1400.523579545" Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.616659 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=3.616631425 podStartE2EDuration="3.616631425s" podCreationTimestamp="2025-11-28 12:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:22.611581188 +0000 UTC m=+1400.553323871" watchObservedRunningTime="2025-11-28 12:16:22.616631425 +0000 UTC m=+1400.558374128" Nov 28 12:16:22 crc kubenswrapper[5030]: I1128 12:16:22.637672 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=3.637643964 podStartE2EDuration="3.637643964s" podCreationTimestamp="2025-11-28 12:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:22.634586732 +0000 UTC m=+1400.576329425" watchObservedRunningTime="2025-11-28 12:16:22.637643964 +0000 UTC m=+1400.579386637" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.014366 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.015726 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.047795 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.065971 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.349560 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.349628 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.371334 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.404627 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.506374 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.506438 5030 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.519744 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.519815 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.593363 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.609527 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.619552 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.626665 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.638514 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.639056 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.639095 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.639106 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.639115 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.639127 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.639137 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:31 crc kubenswrapper[5030]: I1128 12:16:31.639145 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.621936 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.633551 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.642559 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.654191 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.654224 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.654422 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.654460 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 
12:16:33.654486 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.675933 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.731763 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.800934 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.892300 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:33 crc kubenswrapper[5030]: I1128 12:16:33.894704 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:34 crc kubenswrapper[5030]: I1128 12:16:34.402320 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:16:34 crc kubenswrapper[5030]: I1128 12:16:34.420842 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:16:34 crc kubenswrapper[5030]: I1128 12:16:34.674015 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:16:34 crc kubenswrapper[5030]: I1128 12:16:34.688453 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:16:35 crc kubenswrapper[5030]: I1128 12:16:35.672054 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" 
containerName="glance-log" containerID="cri-o://4b3ba84c0e48979662e71a4372b7dea88fd512f6fb33862159d500aa78bda527" gracePeriod=30 Nov 28 12:16:35 crc kubenswrapper[5030]: I1128 12:16:35.672109 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerName="glance-httpd" containerID="cri-o://65738f683417c13505563eea46bbaebc9f083b2e2936ee6b7dbbec45625189e7" gracePeriod=30 Nov 28 12:16:35 crc kubenswrapper[5030]: I1128 12:16:35.672303 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-httpd" containerID="cri-o://a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f" gracePeriod=30 Nov 28 12:16:35 crc kubenswrapper[5030]: I1128 12:16:35.672299 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-log" containerID="cri-o://a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f" gracePeriod=30 Nov 28 12:16:35 crc kubenswrapper[5030]: I1128 12:16:35.679773 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.125:9292/healthcheck\": EOF" Nov 28 12:16:35 crc kubenswrapper[5030]: I1128 12:16:35.679826 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.125:9292/healthcheck\": EOF" Nov 28 12:16:35 crc kubenswrapper[5030]: E1128 12:16:35.869864 5030 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod059f6436_7e7e_4d3f_a114_43b7825b175e.slice/crio-conmon-4b3ba84c0e48979662e71a4372b7dea88fd512f6fb33862159d500aa78bda527.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18a2f88_8006_4e0b_b55f_e4c873e90614.slice/crio-conmon-a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod059f6436_7e7e_4d3f_a114_43b7825b175e.slice/crio-4b3ba84c0e48979662e71a4372b7dea88fd512f6fb33862159d500aa78bda527.scope\": RecentStats: unable to find data in memory cache]" Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.702568 5030 generic.go:334] "Generic (PLEG): container finished" podID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerID="a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f" exitCode=143 Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.704684 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"b18a2f88-8006-4e0b-b55f-e4c873e90614","Type":"ContainerDied","Data":"a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f"} Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.707030 5030 generic.go:334] "Generic (PLEG): container finished" podID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerID="4b3ba84c0e48979662e71a4372b7dea88fd512f6fb33862159d500aa78bda527" exitCode=143 Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.707606 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-log" 
containerID="cri-o://5fe7ca0ca4a26f180e864bdb55a74afbd4c35d5e85a62cd00de34d8ebca93654" gracePeriod=30 Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.707909 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"059f6436-7e7e-4d3f-a114-43b7825b175e","Type":"ContainerDied","Data":"4b3ba84c0e48979662e71a4372b7dea88fd512f6fb33862159d500aa78bda527"} Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.708320 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-log" containerID="cri-o://1cd33b1d4bade3a30ff1a0f37cb55c003d01cb3e1f3166a7aaaac7fac2554b61" gracePeriod=30 Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.709058 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-httpd" containerID="cri-o://436c837aaa455d00ad7832d0d3983190e74da414cfbc264ef7f4ca069b655226" gracePeriod=30 Nov 28 12:16:36 crc kubenswrapper[5030]: I1128 12:16:36.709672 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-httpd" containerID="cri-o://e3b18766410c94e7be5de841b96e21b9bc0d4fe28e8f14130881f3b0375ae806" gracePeriod=30 Nov 28 12:16:37 crc kubenswrapper[5030]: I1128 12:16:37.916960 5030 generic.go:334] "Generic (PLEG): container finished" podID="35f838f4-cb87-481a-8265-02831a9749e1" containerID="5fe7ca0ca4a26f180e864bdb55a74afbd4c35d5e85a62cd00de34d8ebca93654" exitCode=143 Nov 28 12:16:37 crc kubenswrapper[5030]: I1128 12:16:37.917160 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"35f838f4-cb87-481a-8265-02831a9749e1","Type":"ContainerDied","Data":"5fe7ca0ca4a26f180e864bdb55a74afbd4c35d5e85a62cd00de34d8ebca93654"} Nov 28 12:16:37 crc kubenswrapper[5030]: I1128 12:16:37.923693 5030 generic.go:334] "Generic (PLEG): container finished" podID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerID="1cd33b1d4bade3a30ff1a0f37cb55c003d01cb3e1f3166a7aaaac7fac2554b61" exitCode=143 Nov 28 12:16:37 crc kubenswrapper[5030]: I1128 12:16:37.923747 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85","Type":"ContainerDied","Data":"1cd33b1d4bade3a30ff1a0f37cb55c003d01cb3e1f3166a7aaaac7fac2554b61"} Nov 28 12:16:38 crc kubenswrapper[5030]: I1128 12:16:38.948497 5030 generic.go:334] "Generic (PLEG): container finished" podID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerID="65738f683417c13505563eea46bbaebc9f083b2e2936ee6b7dbbec45625189e7" exitCode=0 Nov 28 12:16:38 crc kubenswrapper[5030]: I1128 12:16:38.948988 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"059f6436-7e7e-4d3f-a114-43b7825b175e","Type":"ContainerDied","Data":"65738f683417c13505563eea46bbaebc9f083b2e2936ee6b7dbbec45625189e7"} Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.290901 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379404 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-sys\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379565 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-var-locks-brick\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379647 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-lib-modules\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379685 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-sys" (OuterVolumeSpecName: "sys") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379736 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-scripts\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379767 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379794 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379771 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-config-data\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379834 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.379928 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-run\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380001 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-logs\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380244 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx8wm\" (UniqueName: \"kubernetes.io/projected/059f6436-7e7e-4d3f-a114-43b7825b175e-kube-api-access-qx8wm\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380290 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-httpd-run\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380337 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380421 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-iscsi\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380504 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-nvme\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380538 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-dev\") pod \"059f6436-7e7e-4d3f-a114-43b7825b175e\" (UID: \"059f6436-7e7e-4d3f-a114-43b7825b175e\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.380790 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-run" (OuterVolumeSpecName: "run") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381120 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381159 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381180 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381196 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381196 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381259 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381264 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381281 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-dev" (OuterVolumeSpecName: "dev") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.381296 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-logs" (OuterVolumeSpecName: "logs") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.386102 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance-cache") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.386129 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.386338 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-scripts" (OuterVolumeSpecName: "scripts") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.386358 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/059f6436-7e7e-4d3f-a114-43b7825b175e-kube-api-access-qx8wm" (OuterVolumeSpecName: "kube-api-access-qx8wm") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "kube-api-access-qx8wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.436073 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-config-data" (OuterVolumeSpecName: "config-data") pod "059f6436-7e7e-4d3f-a114-43b7825b175e" (UID: "059f6436-7e7e-4d3f-a114-43b7825b175e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.469917 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.483991 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484034 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484044 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484056 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059f6436-7e7e-4d3f-a114-43b7825b175e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484081 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484094 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484105 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx8wm\" (UniqueName: \"kubernetes.io/projected/059f6436-7e7e-4d3f-a114-43b7825b175e-kube-api-access-qx8wm\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484116 5030 reconciler_common.go:293] "Volume detached for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/059f6436-7e7e-4d3f-a114-43b7825b175e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484131 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.484173 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/059f6436-7e7e-4d3f-a114-43b7825b175e-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.502855 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.509429 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.585395 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-config-data\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.585598 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-httpd-run\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.586016 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.586720 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.586905 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-sys\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.586974 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-sys" (OuterVolumeSpecName: "sys") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587437 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-iscsi\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587627 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-run\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587750 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-nvme\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587865 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587976 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-dev\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.588101 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-scripts\") pod 
\"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.588225 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkfbd\" (UniqueName: \"kubernetes.io/projected/b18a2f88-8006-4e0b-b55f-e4c873e90614-kube-api-access-vkfbd\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.588326 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-var-locks-brick\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.588423 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-lib-modules\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.588563 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-logs\") pod \"b18a2f88-8006-4e0b-b55f-e4c873e90614\" (UID: \"b18a2f88-8006-4e0b-b55f-e4c873e90614\") " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.589358 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.589455 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" 
Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.589813 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.593648 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587737 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587787 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.587780 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-run" (OuterVolumeSpecName: "run") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.590901 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-logs" (OuterVolumeSpecName: "logs") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.591271 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.591309 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.591331 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-dev" (OuterVolumeSpecName: "dev") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.591427 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.593585 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-scripts" (OuterVolumeSpecName: "scripts") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.593646 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.595767 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18a2f88-8006-4e0b-b55f-e4c873e90614-kube-api-access-vkfbd" (OuterVolumeSpecName: "kube-api-access-vkfbd") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "kube-api-access-vkfbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.624729 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-config-data" (OuterVolumeSpecName: "config-data") pod "b18a2f88-8006-4e0b-b55f-e4c873e90614" (UID: "b18a2f88-8006-4e0b-b55f-e4c873e90614"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695097 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695137 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695154 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695198 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695209 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695220 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 
12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695233 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkfbd\" (UniqueName: \"kubernetes.io/projected/b18a2f88-8006-4e0b-b55f-e4c873e90614-kube-api-access-vkfbd\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695244 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695256 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a2f88-8006-4e0b-b55f-e4c873e90614-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695265 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b18a2f88-8006-4e0b-b55f-e4c873e90614-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695275 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a2f88-8006-4e0b-b55f-e4c873e90614-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.695290 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.708192 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.709056 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.796899 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.796942 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.961768 5030 generic.go:334] "Generic (PLEG): container finished" podID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerID="a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f" exitCode=0 Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.961842 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"b18a2f88-8006-4e0b-b55f-e4c873e90614","Type":"ContainerDied","Data":"a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f"} Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.961876 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"b18a2f88-8006-4e0b-b55f-e4c873e90614","Type":"ContainerDied","Data":"294832f5f1c48f0ee54238df57828ccdc5f6abccd7b31f7a1f954d83d0814b3e"} Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.961899 5030 scope.go:117] "RemoveContainer" containerID="a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.962026 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.968557 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"059f6436-7e7e-4d3f-a114-43b7825b175e","Type":"ContainerDied","Data":"e4221934761f7d98c420510910cb73b1224dac6090ce740bd5c9e14128ac4192"} Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.968588 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.972878 5030 generic.go:334] "Generic (PLEG): container finished" podID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerID="e3b18766410c94e7be5de841b96e21b9bc0d4fe28e8f14130881f3b0375ae806" exitCode=0 Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.972956 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85","Type":"ContainerDied","Data":"e3b18766410c94e7be5de841b96e21b9bc0d4fe28e8f14130881f3b0375ae806"} Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.978672 5030 generic.go:334] "Generic (PLEG): container finished" podID="35f838f4-cb87-481a-8265-02831a9749e1" containerID="436c837aaa455d00ad7832d0d3983190e74da414cfbc264ef7f4ca069b655226" exitCode=0 Nov 28 12:16:39 crc kubenswrapper[5030]: I1128 12:16:39.978731 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"35f838f4-cb87-481a-8265-02831a9749e1","Type":"ContainerDied","Data":"436c837aaa455d00ad7832d0d3983190e74da414cfbc264ef7f4ca069b655226"} Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.085575 5030 scope.go:117] "RemoveContainer" containerID="a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.093998 
5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.119037 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.136735 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.144649 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.148612 5030 scope.go:117] "RemoveContainer" containerID="a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.149080 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f\": container with ID starting with a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f not found: ID does not exist" containerID="a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.149114 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f"} err="failed to get container status \"a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f\": rpc error: code = NotFound desc = could not find container \"a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f\": container with ID starting with a958895f9d19f80a46fb781ee4cf2ca5fa2dd010142cf4e34297c4f4b68d846f not found: ID does not exist" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.149171 5030 scope.go:117] "RemoveContainer" 
containerID="a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.149435 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f\": container with ID starting with a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f not found: ID does not exist" containerID="a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.149483 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f"} err="failed to get container status \"a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f\": rpc error: code = NotFound desc = could not find container \"a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f\": container with ID starting with a16621272eb1fc723082b660ae01d6c1459cf41795d04e81e40e0e6d86a0e22f not found: ID does not exist" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.149502 5030 scope.go:117] "RemoveContainer" containerID="65738f683417c13505563eea46bbaebc9f083b2e2936ee6b7dbbec45625189e7" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.181904 5030 scope.go:117] "RemoveContainer" containerID="4b3ba84c0e48979662e71a4372b7dea88fd512f6fb33862159d500aa78bda527" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.280408 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.293399 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.402756 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" path="/var/lib/kubelet/pods/059f6436-7e7e-4d3f-a114-43b7825b175e/volumes" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.403673 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" path="/var/lib/kubelet/pods/b18a2f88-8006-4e0b-b55f-e4c873e90614/volumes" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.410617 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-run\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.410774 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.410903 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-config-data\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.410693 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-run" (OuterVolumeSpecName: "run") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411038 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-lib-modules\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411104 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411185 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-iscsi\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411290 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-logs\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411370 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411445 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-sys\") pod 
\"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411173 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411308 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411682 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-sys" (OuterVolumeSpecName: "sys") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411683 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-httpd-run\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411811 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-logs\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411876 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-dev\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411934 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-iscsi\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.411975 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-lib-modules\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412012 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-var-locks-brick\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412010 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412037 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-sys\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412065 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-nvme\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412069 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412118 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-httpd-run\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412142 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412576 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwvng\" (UniqueName: \"kubernetes.io/projected/35f838f4-cb87-481a-8265-02831a9749e1-kube-api-access-jwvng\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412243 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-sys" (OuterVolumeSpecName: "sys") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412609 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-var-locks-brick\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412650 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt9vl\" (UniqueName: \"kubernetes.io/projected/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-kube-api-access-mt9vl\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412685 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-scripts\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412714 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-scripts\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412744 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-dev\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412781 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-run\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412812 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-nvme\") pod \"35f838f4-cb87-481a-8265-02831a9749e1\" (UID: \"35f838f4-cb87-481a-8265-02831a9749e1\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412842 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-config-data\") pod \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\" (UID: \"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85\") " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412272 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412295 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412323 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412434 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-dev" (OuterVolumeSpecName: "dev") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412459 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-logs" (OuterVolumeSpecName: "logs") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.412582 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413260 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-logs" (OuterVolumeSpecName: "logs") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413258 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413383 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-run" (OuterVolumeSpecName: "run") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413437 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-dev" (OuterVolumeSpecName: "dev") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413479 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413679 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413774 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413836 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413890 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413943 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.413998 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-nvme\") on node 
\"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414057 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414109 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414159 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414208 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414257 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414307 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414363 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414418 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414494 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f838f4-cb87-481a-8265-02831a9749e1-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414558 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/35f838f4-cb87-481a-8265-02831a9749e1-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414615 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.414666 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.416764 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f838f4-cb87-481a-8265-02831a9749e1-kube-api-access-jwvng" (OuterVolumeSpecName: "kube-api-access-jwvng") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "kube-api-access-jwvng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.417311 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.418138 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-scripts" (OuterVolumeSpecName: "scripts") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.418359 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-scripts" (OuterVolumeSpecName: "scripts") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.418361 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.419423 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "glance") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "local-storage14-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.424324 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-kube-api-access-mt9vl" (OuterVolumeSpecName: "kube-api-access-mt9vl") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "kube-api-access-mt9vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.424459 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.461633 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-config-data" (OuterVolumeSpecName: "config-data") pod "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" (UID: "6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.462781 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-config-data" (OuterVolumeSpecName: "config-data") pod "35f838f4-cb87-481a-8265-02831a9749e1" (UID: "35f838f4-cb87-481a-8265-02831a9749e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516135 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt9vl\" (UniqueName: \"kubernetes.io/projected/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-kube-api-access-mt9vl\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516242 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516254 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516263 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516304 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516314 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f838f4-cb87-481a-8265-02831a9749e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516327 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516340 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for 
volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516352 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.516361 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwvng\" (UniqueName: \"kubernetes.io/projected/35f838f4-cb87-481a-8265-02831a9749e1-kube-api-access-jwvng\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.529561 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.531998 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.532139 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.542852 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.618298 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.618354 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.618368 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.618379 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.852960 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7st4"] Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.853796 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.853949 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.854074 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.854799 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.854894 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.854906 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.854924 5030 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.854933 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.854973 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.854983 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.855004 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855016 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.855032 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855042 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: E1128 12:16:40.855064 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855072 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855451 5030 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855495 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855513 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18a2f88-8006-4e0b-b55f-e4c873e90614" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855523 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855537 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855548 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerName="glance-log" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855563 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="059f6436-7e7e-4d3f-a114-43b7825b175e" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.855577 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f838f4-cb87-481a-8265-02831a9749e1" containerName="glance-httpd" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.857182 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.884729 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7st4"] Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.922739 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-catalog-content\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.925936 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-utilities\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.926160 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsqm\" (UniqueName: \"kubernetes.io/projected/b1c7cd3c-8576-48e4-a437-792e22b0daa4-kube-api-access-cbsqm\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.992203 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85","Type":"ContainerDied","Data":"5548e75b8991550d8268e871ab004ef0f5420df77b38368a64861eeb97a3846d"} Nov 28 12:16:40 crc kubenswrapper[5030]: I1128 12:16:40.992273 5030 scope.go:117] "RemoveContainer" containerID="e3b18766410c94e7be5de841b96e21b9bc0d4fe28e8f14130881f3b0375ae806" Nov 28 12:16:40 crc 
kubenswrapper[5030]: I1128 12:16:40.992452 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.001988 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"35f838f4-cb87-481a-8265-02831a9749e1","Type":"ContainerDied","Data":"31af4b641d4c5d3f1208187c960651520861078515bf77c0a56d823bfb5d19a4"} Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.002113 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.026734 5030 scope.go:117] "RemoveContainer" containerID="1cd33b1d4bade3a30ff1a0f37cb55c003d01cb3e1f3166a7aaaac7fac2554b61" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.027737 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-catalog-content\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.027813 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-utilities\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.027865 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbsqm\" (UniqueName: \"kubernetes.io/projected/b1c7cd3c-8576-48e4-a437-792e22b0daa4-kube-api-access-cbsqm\") pod \"redhat-operators-d7st4\" (UID: 
\"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.028708 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-catalog-content\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.028901 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-utilities\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.056659 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsqm\" (UniqueName: \"kubernetes.io/projected/b1c7cd3c-8576-48e4-a437-792e22b0daa4-kube-api-access-cbsqm\") pod \"redhat-operators-d7st4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") " pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.059030 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.068350 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.078461 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.084867 5030 scope.go:117] "RemoveContainer" containerID="436c837aaa455d00ad7832d0d3983190e74da414cfbc264ef7f4ca069b655226" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 
12:16:41.085776 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.108031 5030 scope.go:117] "RemoveContainer" containerID="5fe7ca0ca4a26f180e864bdb55a74afbd4c35d5e85a62cd00de34d8ebca93654" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.195823 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:41 crc kubenswrapper[5030]: I1128 12:16:41.611091 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7st4"] Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.013163 5030 generic.go:334] "Generic (PLEG): container finished" podID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerID="0cb21136ac323917d27b72b4698718a23a1b6a6e614c38f86c22c2f6a3f4961d" exitCode=0 Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.013231 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7st4" event={"ID":"b1c7cd3c-8576-48e4-a437-792e22b0daa4","Type":"ContainerDied","Data":"0cb21136ac323917d27b72b4698718a23a1b6a6e614c38f86c22c2f6a3f4961d"} Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.013260 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7st4" event={"ID":"b1c7cd3c-8576-48e4-a437-792e22b0daa4","Type":"ContainerStarted","Data":"ab51126ea5653722456a01c989423b90d8fc667ba28b031d55fecb7e285aa87c"} Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.015911 5030 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.140320 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.141114 5030 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerName="glance-log" containerID="cri-o://cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513" gracePeriod=30 Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.141194 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerName="glance-httpd" containerID="cri-o://e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97" gracePeriod=30 Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.409796 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f838f4-cb87-481a-8265-02831a9749e1" path="/var/lib/kubelet/pods/35f838f4-cb87-481a-8265-02831a9749e1/volumes" Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.411220 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85" path="/var/lib/kubelet/pods/6e8cc6ab-d2ff-4efb-b3e9-593d36a2fa85/volumes" Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.673209 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.673753 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-log" containerID="cri-o://aba50d022e7b2857b4f69f39375e7451ba658171b73188f76378d6f322bdab43" gracePeriod=30 Nov 28 12:16:42 crc kubenswrapper[5030]: I1128 12:16:42.674390 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-httpd" 
containerID="cri-o://1c9a8692787215c9e6644031d6deb80f03cb45f5c4633d2808a33463220ea85a" gracePeriod=30 Nov 28 12:16:43 crc kubenswrapper[5030]: I1128 12:16:43.030184 5030 generic.go:334] "Generic (PLEG): container finished" podID="14d40f48-84b0-4e52-878c-941e9433eb63" containerID="aba50d022e7b2857b4f69f39375e7451ba658171b73188f76378d6f322bdab43" exitCode=143 Nov 28 12:16:43 crc kubenswrapper[5030]: I1128 12:16:43.030272 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"14d40f48-84b0-4e52-878c-941e9433eb63","Type":"ContainerDied","Data":"aba50d022e7b2857b4f69f39375e7451ba658171b73188f76378d6f322bdab43"} Nov 28 12:16:43 crc kubenswrapper[5030]: I1128 12:16:43.033005 5030 generic.go:334] "Generic (PLEG): container finished" podID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerID="cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513" exitCode=143 Nov 28 12:16:43 crc kubenswrapper[5030]: I1128 12:16:43.033133 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"fcc16ff7-97d5-4a61-a722-98fb7c811637","Type":"ContainerDied","Data":"cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513"} Nov 28 12:16:43 crc kubenswrapper[5030]: I1128 12:16:43.035811 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7st4" event={"ID":"b1c7cd3c-8576-48e4-a437-792e22b0daa4","Type":"ContainerStarted","Data":"1ecb71802fe925d424af6890a7f9e3f38dddf8386165e3ba76acf80e760c5d35"} Nov 28 12:16:44 crc kubenswrapper[5030]: I1128 12:16:44.050369 5030 generic.go:334] "Generic (PLEG): container finished" podID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerID="1ecb71802fe925d424af6890a7f9e3f38dddf8386165e3ba76acf80e760c5d35" exitCode=0 Nov 28 12:16:44 crc kubenswrapper[5030]: I1128 12:16:44.050481 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-d7st4" event={"ID":"b1c7cd3c-8576-48e4-a437-792e22b0daa4","Type":"ContainerDied","Data":"1ecb71802fe925d424af6890a7f9e3f38dddf8386165e3ba76acf80e760c5d35"} Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.063304 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7st4" event={"ID":"b1c7cd3c-8576-48e4-a437-792e22b0daa4","Type":"ContainerStarted","Data":"a76eb2d3d7785684ef8b9e6cd2842abab690eb7bc9aadfac4fec68c036119253"} Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.099006 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7st4" podStartSLOduration=2.596551727 podStartE2EDuration="5.098977707s" podCreationTimestamp="2025-11-28 12:16:40 +0000 UTC" firstStartedPulling="2025-11-28 12:16:42.015713423 +0000 UTC m=+1419.957456106" lastFinishedPulling="2025-11-28 12:16:44.518139383 +0000 UTC m=+1422.459882086" observedRunningTime="2025-11-28 12:16:45.093823238 +0000 UTC m=+1423.035565921" watchObservedRunningTime="2025-11-28 12:16:45.098977707 +0000 UTC m=+1423.040720390" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.676199 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821229 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-lib-modules\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821350 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821459 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-run\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821501 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821547 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-var-locks-brick\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821591 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-logs\") pod 
\"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821610 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-run" (OuterVolumeSpecName: "run") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821642 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-dev\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821672 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-config-data\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821694 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-scripts\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821698 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-dev" (OuterVolumeSpecName: "dev") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821738 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-nvme\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821770 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-sys\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821717 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821818 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821888 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-iscsi\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821986 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-sys" (OuterVolumeSpecName: "sys") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822030 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822116 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-httpd-run\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822115 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-logs" (OuterVolumeSpecName: "logs") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822164 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpgvc\" (UniqueName: \"kubernetes.io/projected/fcc16ff7-97d5-4a61-a722-98fb7c811637-kube-api-access-dpgvc\") pod \"fcc16ff7-97d5-4a61-a722-98fb7c811637\" (UID: \"fcc16ff7-97d5-4a61-a722-98fb7c811637\") " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822323 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822601 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822624 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822637 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822647 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822657 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822669 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822679 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.822689 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc16ff7-97d5-4a61-a722-98fb7c811637-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.821403 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.829905 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.829992 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-scripts" (OuterVolumeSpecName: "scripts") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.832542 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc16ff7-97d5-4a61-a722-98fb7c811637-kube-api-access-dpgvc" (OuterVolumeSpecName: "kube-api-access-dpgvc") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "kube-api-access-dpgvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.834753 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance-cache") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.866835 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-config-data" (OuterVolumeSpecName: "config-data") pod "fcc16ff7-97d5-4a61-a722-98fb7c811637" (UID: "fcc16ff7-97d5-4a61-a722-98fb7c811637"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.924256 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.924909 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.924929 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.924941 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc16ff7-97d5-4a61-a722-98fb7c811637-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.924971 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpgvc\" (UniqueName: \"kubernetes.io/projected/fcc16ff7-97d5-4a61-a722-98fb7c811637-kube-api-access-dpgvc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.924986 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcc16ff7-97d5-4a61-a722-98fb7c811637-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.938940 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Nov 28 12:16:45 crc kubenswrapper[5030]: I1128 12:16:45.944974 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" 
(UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.026764 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.026796 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.078152 5030 generic.go:334] "Generic (PLEG): container finished" podID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerID="e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97" exitCode=0 Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.078255 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.078241 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"fcc16ff7-97d5-4a61-a722-98fb7c811637","Type":"ContainerDied","Data":"e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97"} Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.078324 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"fcc16ff7-97d5-4a61-a722-98fb7c811637","Type":"ContainerDied","Data":"9c44d055e03d1a39b48097825148878f4336a055dc81432b0076cd1bf44a8f50"} Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.078359 5030 scope.go:117] "RemoveContainer" containerID="e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.085063 5030 generic.go:334] "Generic (PLEG): container finished" 
podID="14d40f48-84b0-4e52-878c-941e9433eb63" containerID="1c9a8692787215c9e6644031d6deb80f03cb45f5c4633d2808a33463220ea85a" exitCode=0 Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.086337 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"14d40f48-84b0-4e52-878c-941e9433eb63","Type":"ContainerDied","Data":"1c9a8692787215c9e6644031d6deb80f03cb45f5c4633d2808a33463220ea85a"} Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.113882 5030 scope.go:117] "RemoveContainer" containerID="cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513" Nov 28 12:16:46 crc kubenswrapper[5030]: E1128 12:16:46.120455 5030 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14d40f48_84b0_4e52_878c_941e9433eb63.slice/crio-conmon-1c9a8692787215c9e6644031d6deb80f03cb45f5c4633d2808a33463220ea85a.scope\": RecentStats: unable to find data in memory cache]" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.123805 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.130770 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.148436 5030 scope.go:117] "RemoveContainer" containerID="e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97" Nov 28 12:16:46 crc kubenswrapper[5030]: E1128 12:16:46.149015 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97\": container with ID starting with e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97 not found: ID does not exist" 
containerID="e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.149063 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97"} err="failed to get container status \"e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97\": rpc error: code = NotFound desc = could not find container \"e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97\": container with ID starting with e69c398366bd5ca73892124a98ac0718e94dc6efdddbba8d60ae7441a1acdb97 not found: ID does not exist" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.149096 5030 scope.go:117] "RemoveContainer" containerID="cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513" Nov 28 12:16:46 crc kubenswrapper[5030]: E1128 12:16:46.149598 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513\": container with ID starting with cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513 not found: ID does not exist" containerID="cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.149631 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513"} err="failed to get container status \"cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513\": rpc error: code = NotFound desc = could not find container \"cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513\": container with ID starting with cbaf299784251a7af62a131e81d1b365a8987c4fa4a16a165b86cb52d7563513 not found: ID does not exist" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.164277 5030 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231231 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-run\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231313 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-nvme\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231347 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-lib-modules\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231444 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-scripts\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231456 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231542 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-var-locks-brick\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231459 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-run" (OuterVolumeSpecName: "run") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231572 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231576 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-httpd-run\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231715 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231836 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-config-data\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231924 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.231973 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-dev\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.232017 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgqhd\" (UniqueName: \"kubernetes.io/projected/14d40f48-84b0-4e52-878c-941e9433eb63-kube-api-access-pgqhd\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.232128 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.232178 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-iscsi\") pod 
\"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.232244 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-logs\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.232316 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-sys\") pod \"14d40f48-84b0-4e52-878c-941e9433eb63\" (UID: \"14d40f48-84b0-4e52-878c-941e9433eb63\") " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.232756 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.232810 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233019 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-dev" (OuterVolumeSpecName: "dev") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233884 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233911 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233927 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233938 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233949 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233958 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233971 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.233983 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-sys" (OuterVolumeSpecName: "sys") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.234215 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-logs" (OuterVolumeSpecName: "logs") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.238624 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d40f48-84b0-4e52-878c-941e9433eb63-kube-api-access-pgqhd" (OuterVolumeSpecName: "kube-api-access-pgqhd") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "kube-api-access-pgqhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.239017 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-scripts" (OuterVolumeSpecName: "scripts") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.239200 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.239454 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.276392 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-config-data" (OuterVolumeSpecName: "config-data") pod "14d40f48-84b0-4e52-878c-941e9433eb63" (UID: "14d40f48-84b0-4e52-878c-941e9433eb63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.335652 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.335705 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d40f48-84b0-4e52-878c-941e9433eb63-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.335770 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.335784 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgqhd\" (UniqueName: \"kubernetes.io/projected/14d40f48-84b0-4e52-878c-941e9433eb63-kube-api-access-pgqhd\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 
12:16:46.335807 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.335817 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14d40f48-84b0-4e52-878c-941e9433eb63-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.335826 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/14d40f48-84b0-4e52-878c-941e9433eb63-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.350048 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.351610 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.402551 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" path="/var/lib/kubelet/pods/fcc16ff7-97d5-4a61-a722-98fb7c811637/volumes" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.437115 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:46 crc kubenswrapper[5030]: I1128 12:16:46.437167 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:47 crc kubenswrapper[5030]: I1128 12:16:47.093216 5030 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"14d40f48-84b0-4e52-878c-941e9433eb63","Type":"ContainerDied","Data":"1e345c01ece0f06a76846695499fa27e604d024e6b4ad2de6182ddb849acc9fb"} Nov 28 12:16:47 crc kubenswrapper[5030]: I1128 12:16:47.093312 5030 scope.go:117] "RemoveContainer" containerID="1c9a8692787215c9e6644031d6deb80f03cb45f5c4633d2808a33463220ea85a" Nov 28 12:16:47 crc kubenswrapper[5030]: I1128 12:16:47.093656 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:16:47 crc kubenswrapper[5030]: I1128 12:16:47.117414 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:47 crc kubenswrapper[5030]: I1128 12:16:47.121998 5030 scope.go:117] "RemoveContainer" containerID="aba50d022e7b2857b4f69f39375e7451ba658171b73188f76378d6f322bdab43" Nov 28 12:16:47 crc kubenswrapper[5030]: I1128 12:16:47.124972 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.232656 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-44kdg"] Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.242731 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-44kdg"] Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.285956 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance82b7-account-delete-x266h"] Nov 28 12:16:48 crc kubenswrapper[5030]: E1128 12:16:48.286259 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerName="glance-httpd" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286277 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" 
containerName="glance-httpd" Nov 28 12:16:48 crc kubenswrapper[5030]: E1128 12:16:48.286294 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerName="glance-log" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286301 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerName="glance-log" Nov 28 12:16:48 crc kubenswrapper[5030]: E1128 12:16:48.286325 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-log" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286332 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-log" Nov 28 12:16:48 crc kubenswrapper[5030]: E1128 12:16:48.286340 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-httpd" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286346 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-httpd" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286479 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerName="glance-httpd" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286531 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-log" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286541 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc16ff7-97d5-4a61-a722-98fb7c811637" containerName="glance-log" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.286551 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" containerName="glance-httpd" Nov 28 
12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.287019 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.300614 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance82b7-account-delete-x266h"] Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.369669 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99094421-56b1-4e09-acf2-771cf9e11ce9-operator-scripts\") pod \"glance82b7-account-delete-x266h\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.370167 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msn2k\" (UniqueName: \"kubernetes.io/projected/99094421-56b1-4e09-acf2-771cf9e11ce9-kube-api-access-msn2k\") pod \"glance82b7-account-delete-x266h\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.402830 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14d40f48-84b0-4e52-878c-941e9433eb63" path="/var/lib/kubelet/pods/14d40f48-84b0-4e52-878c-941e9433eb63/volumes" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.403577 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e15600-df1f-4328-b78a-938c6d7789fc" path="/var/lib/kubelet/pods/46e15600-df1f-4328-b78a-938c6d7789fc/volumes" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.472002 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msn2k\" (UniqueName: \"kubernetes.io/projected/99094421-56b1-4e09-acf2-771cf9e11ce9-kube-api-access-msn2k\") 
pod \"glance82b7-account-delete-x266h\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.472110 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99094421-56b1-4e09-acf2-771cf9e11ce9-operator-scripts\") pod \"glance82b7-account-delete-x266h\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.473251 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99094421-56b1-4e09-acf2-771cf9e11ce9-operator-scripts\") pod \"glance82b7-account-delete-x266h\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.496997 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msn2k\" (UniqueName: \"kubernetes.io/projected/99094421-56b1-4e09-acf2-771cf9e11ce9-kube-api-access-msn2k\") pod \"glance82b7-account-delete-x266h\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.602178 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:48 crc kubenswrapper[5030]: I1128 12:16:48.992693 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance82b7-account-delete-x266h"] Nov 28 12:16:49 crc kubenswrapper[5030]: I1128 12:16:49.114175 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance82b7-account-delete-x266h" event={"ID":"99094421-56b1-4e09-acf2-771cf9e11ce9","Type":"ContainerStarted","Data":"ed16f82eeeba44d36b507e10357e908e2a87f5352c533c2a47fc51edffc61e1a"} Nov 28 12:16:50 crc kubenswrapper[5030]: I1128 12:16:50.125134 5030 generic.go:334] "Generic (PLEG): container finished" podID="99094421-56b1-4e09-acf2-771cf9e11ce9" containerID="db3ce55ae441325cbb66ce7308255af7316f1626da2dc15bf2f011df6581197f" exitCode=0 Nov 28 12:16:50 crc kubenswrapper[5030]: I1128 12:16:50.125574 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance82b7-account-delete-x266h" event={"ID":"99094421-56b1-4e09-acf2-771cf9e11ce9","Type":"ContainerDied","Data":"db3ce55ae441325cbb66ce7308255af7316f1626da2dc15bf2f011df6581197f"} Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.196749 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.196834 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.279655 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7st4" Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.440747 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance82b7-account-delete-x266h" Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.522210 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msn2k\" (UniqueName: \"kubernetes.io/projected/99094421-56b1-4e09-acf2-771cf9e11ce9-kube-api-access-msn2k\") pod \"99094421-56b1-4e09-acf2-771cf9e11ce9\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.522380 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99094421-56b1-4e09-acf2-771cf9e11ce9-operator-scripts\") pod \"99094421-56b1-4e09-acf2-771cf9e11ce9\" (UID: \"99094421-56b1-4e09-acf2-771cf9e11ce9\") " Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.525049 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99094421-56b1-4e09-acf2-771cf9e11ce9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99094421-56b1-4e09-acf2-771cf9e11ce9" (UID: "99094421-56b1-4e09-acf2-771cf9e11ce9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.548100 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99094421-56b1-4e09-acf2-771cf9e11ce9-kube-api-access-msn2k" (OuterVolumeSpecName: "kube-api-access-msn2k") pod "99094421-56b1-4e09-acf2-771cf9e11ce9" (UID: "99094421-56b1-4e09-acf2-771cf9e11ce9"). InnerVolumeSpecName "kube-api-access-msn2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.624626 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msn2k\" (UniqueName: \"kubernetes.io/projected/99094421-56b1-4e09-acf2-771cf9e11ce9-kube-api-access-msn2k\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:51 crc kubenswrapper[5030]: I1128 12:16:51.624661 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99094421-56b1-4e09-acf2-771cf9e11ce9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:16:52 crc kubenswrapper[5030]: I1128 12:16:52.145374 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance82b7-account-delete-x266h" event={"ID":"99094421-56b1-4e09-acf2-771cf9e11ce9","Type":"ContainerDied","Data":"ed16f82eeeba44d36b507e10357e908e2a87f5352c533c2a47fc51edffc61e1a"} Nov 28 12:16:52 crc kubenswrapper[5030]: I1128 12:16:52.145426 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance82b7-account-delete-x266h"
Nov 28 12:16:52 crc kubenswrapper[5030]: I1128 12:16:52.145435 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed16f82eeeba44d36b507e10357e908e2a87f5352c533c2a47fc51edffc61e1a"
Nov 28 12:16:52 crc kubenswrapper[5030]: I1128 12:16:52.206176 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7st4"
Nov 28 12:16:52 crc kubenswrapper[5030]: I1128 12:16:52.269239 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7st4"]
Nov 28 12:16:53 crc kubenswrapper[5030]: I1128 12:16:53.327285 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-5p78d"]
Nov 28 12:16:53 crc kubenswrapper[5030]: I1128 12:16:53.336152 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-5p78d"]
Nov 28 12:16:53 crc kubenswrapper[5030]: I1128 12:16:53.344326 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance82b7-account-delete-x266h"]
Nov 28 12:16:53 crc kubenswrapper[5030]: I1128 12:16:53.353944 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-82b7-account-create-update-xkcqd"]
Nov 28 12:16:53 crc kubenswrapper[5030]: I1128 12:16:53.361526 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance82b7-account-delete-x266h"]
Nov 28 12:16:53 crc kubenswrapper[5030]: I1128 12:16:53.369069 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-82b7-account-create-update-xkcqd"]
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.122258 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"]
Nov 28 12:16:54 crc kubenswrapper[5030]: E1128 12:16:54.123183 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99094421-56b1-4e09-acf2-771cf9e11ce9" containerName="mariadb-account-delete"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.123197 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="99094421-56b1-4e09-acf2-771cf9e11ce9" containerName="mariadb-account-delete"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.123368 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="99094421-56b1-4e09-acf2-771cf9e11ce9" containerName="mariadb-account-delete"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.124043 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.126067 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.133096 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-ctb5m"]
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.134520 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.144247 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"]
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.151965 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-ctb5m"]
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.170732 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7st4" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="registry-server" containerID="cri-o://a76eb2d3d7785684ef8b9e6cd2842abab690eb7bc9aadfac4fec68c036119253" gracePeriod=2
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.270843 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d559fcf7-12fc-4984-8af3-65b7416c572c-operator-scripts\") pod \"glance-d5db-account-create-update-sf5ck\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") " pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.270920 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scxd\" (UniqueName: \"kubernetes.io/projected/c9d28ad5-aa34-41cc-8875-0b6395e4b205-kube-api-access-7scxd\") pod \"glance-db-create-ctb5m\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") " pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.270962 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9d28ad5-aa34-41cc-8875-0b6395e4b205-operator-scripts\") pod \"glance-db-create-ctb5m\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") " pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.271024 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6w95\" (UniqueName: \"kubernetes.io/projected/d559fcf7-12fc-4984-8af3-65b7416c572c-kube-api-access-m6w95\") pod \"glance-d5db-account-create-update-sf5ck\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") " pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.373676 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6w95\" (UniqueName: \"kubernetes.io/projected/d559fcf7-12fc-4984-8af3-65b7416c572c-kube-api-access-m6w95\") pod \"glance-d5db-account-create-update-sf5ck\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") " pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.373820 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d559fcf7-12fc-4984-8af3-65b7416c572c-operator-scripts\") pod \"glance-d5db-account-create-update-sf5ck\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") " pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.373890 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7scxd\" (UniqueName: \"kubernetes.io/projected/c9d28ad5-aa34-41cc-8875-0b6395e4b205-kube-api-access-7scxd\") pod \"glance-db-create-ctb5m\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") " pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.373950 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9d28ad5-aa34-41cc-8875-0b6395e4b205-operator-scripts\") pod \"glance-db-create-ctb5m\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") " pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.375453 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9d28ad5-aa34-41cc-8875-0b6395e4b205-operator-scripts\") pod \"glance-db-create-ctb5m\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") " pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.375544 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d559fcf7-12fc-4984-8af3-65b7416c572c-operator-scripts\") pod \"glance-d5db-account-create-update-sf5ck\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") " pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.395666 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6w95\" (UniqueName: \"kubernetes.io/projected/d559fcf7-12fc-4984-8af3-65b7416c572c-kube-api-access-m6w95\") pod \"glance-d5db-account-create-update-sf5ck\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") " pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.395669 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scxd\" (UniqueName: \"kubernetes.io/projected/c9d28ad5-aa34-41cc-8875-0b6395e4b205-kube-api-access-7scxd\") pod \"glance-db-create-ctb5m\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") " pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.403669 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58926cd1-1db9-4ad5-a1fd-4f13e28eec20" path="/var/lib/kubelet/pods/58926cd1-1db9-4ad5-a1fd-4f13e28eec20/volumes"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.404364 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ddeb74-27df-42cc-aadc-c7d68c79f0c4" path="/var/lib/kubelet/pods/87ddeb74-27df-42cc-aadc-c7d68c79f0c4/volumes"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.405036 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99094421-56b1-4e09-acf2-771cf9e11ce9" path="/var/lib/kubelet/pods/99094421-56b1-4e09-acf2-771cf9e11ce9/volumes"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.461318 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.486500 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.794565 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-ctb5m"]
Nov 28 12:16:54 crc kubenswrapper[5030]: I1128 12:16:54.956485 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"]
Nov 28 12:16:55 crc kubenswrapper[5030]: I1128 12:16:55.183987 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-ctb5m" event={"ID":"c9d28ad5-aa34-41cc-8875-0b6395e4b205","Type":"ContainerStarted","Data":"913a0c4d9b9e8cd3b56cce7035396035e57f254752353279f988ff7e62a5cc7a"}
Nov 28 12:16:55 crc kubenswrapper[5030]: I1128 12:16:55.186767 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck" event={"ID":"d559fcf7-12fc-4984-8af3-65b7416c572c","Type":"ContainerStarted","Data":"9c10d679ac54b0ebd185102f4a8c45b537ddd680e18cd084015798fc6ce69723"}
Nov 28 12:16:55 crc kubenswrapper[5030]: I1128 12:16:55.203941 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-ctb5m" podStartSLOduration=1.203916207 podStartE2EDuration="1.203916207s" podCreationTimestamp="2025-11-28 12:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:55.199176589 +0000 UTC m=+1433.140919262" watchObservedRunningTime="2025-11-28 12:16:55.203916207 +0000 UTC m=+1433.145658890"
Nov 28 12:16:55 crc kubenswrapper[5030]: I1128 12:16:55.219715 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck" podStartSLOduration=1.2196911639999999 podStartE2EDuration="1.219691164s" podCreationTimestamp="2025-11-28 12:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:16:55.219566101 +0000 UTC m=+1433.161308794" watchObservedRunningTime="2025-11-28 12:16:55.219691164 +0000 UTC m=+1433.161433847"
Nov 28 12:16:56 crc kubenswrapper[5030]: I1128 12:16:56.199386 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-ctb5m" event={"ID":"c9d28ad5-aa34-41cc-8875-0b6395e4b205","Type":"ContainerStarted","Data":"cb539a9036dbc76fd8be1f623f7ad3e610e49929da53725aaf973cf90165cf77"}
Nov 28 12:16:56 crc kubenswrapper[5030]: I1128 12:16:56.202135 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck" event={"ID":"d559fcf7-12fc-4984-8af3-65b7416c572c","Type":"ContainerStarted","Data":"7b939c7d257ece5946fb1fd4f0b0e192f7bcbe3c31e82c7362dfb14d2e29ded7"}
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.228859 5030 generic.go:334] "Generic (PLEG): container finished" podID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerID="a76eb2d3d7785684ef8b9e6cd2842abab690eb7bc9aadfac4fec68c036119253" exitCode=0
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.228992 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7st4" event={"ID":"b1c7cd3c-8576-48e4-a437-792e22b0daa4","Type":"ContainerDied","Data":"a76eb2d3d7785684ef8b9e6cd2842abab690eb7bc9aadfac4fec68c036119253"}
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.232314 5030 generic.go:334] "Generic (PLEG): container finished" podID="c9d28ad5-aa34-41cc-8875-0b6395e4b205" containerID="cb539a9036dbc76fd8be1f623f7ad3e610e49929da53725aaf973cf90165cf77" exitCode=0
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.232390 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-ctb5m" event={"ID":"c9d28ad5-aa34-41cc-8875-0b6395e4b205","Type":"ContainerDied","Data":"cb539a9036dbc76fd8be1f623f7ad3e610e49929da53725aaf973cf90165cf77"}
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.234328 5030 generic.go:334] "Generic (PLEG): container finished" podID="d559fcf7-12fc-4984-8af3-65b7416c572c" containerID="7b939c7d257ece5946fb1fd4f0b0e192f7bcbe3c31e82c7362dfb14d2e29ded7" exitCode=0
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.234378 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck" event={"ID":"d559fcf7-12fc-4984-8af3-65b7416c572c","Type":"ContainerDied","Data":"7b939c7d257ece5946fb1fd4f0b0e192f7bcbe3c31e82c7362dfb14d2e29ded7"}
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.407951 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7st4"
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.542282 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-catalog-content\") pod \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") "
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.542695 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-utilities\") pod \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") "
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.542950 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbsqm\" (UniqueName: \"kubernetes.io/projected/b1c7cd3c-8576-48e4-a437-792e22b0daa4-kube-api-access-cbsqm\") pod \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\" (UID: \"b1c7cd3c-8576-48e4-a437-792e22b0daa4\") "
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.544270 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-utilities" (OuterVolumeSpecName: "utilities") pod "b1c7cd3c-8576-48e4-a437-792e22b0daa4" (UID: "b1c7cd3c-8576-48e4-a437-792e22b0daa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.549077 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c7cd3c-8576-48e4-a437-792e22b0daa4-kube-api-access-cbsqm" (OuterVolumeSpecName: "kube-api-access-cbsqm") pod "b1c7cd3c-8576-48e4-a437-792e22b0daa4" (UID: "b1c7cd3c-8576-48e4-a437-792e22b0daa4"). InnerVolumeSpecName "kube-api-access-cbsqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.636111 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1c7cd3c-8576-48e4-a437-792e22b0daa4" (UID: "b1c7cd3c-8576-48e4-a437-792e22b0daa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.644451 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbsqm\" (UniqueName: \"kubernetes.io/projected/b1c7cd3c-8576-48e4-a437-792e22b0daa4-kube-api-access-cbsqm\") on node \"crc\" DevicePath \"\""
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.644506 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 12:16:57 crc kubenswrapper[5030]: I1128 12:16:57.644515 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c7cd3c-8576-48e4-a437-792e22b0daa4-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.256787 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7st4" event={"ID":"b1c7cd3c-8576-48e4-a437-792e22b0daa4","Type":"ContainerDied","Data":"ab51126ea5653722456a01c989423b90d8fc667ba28b031d55fecb7e285aa87c"}
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.257009 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7st4"
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.260820 5030 scope.go:117] "RemoveContainer" containerID="a76eb2d3d7785684ef8b9e6cd2842abab690eb7bc9aadfac4fec68c036119253"
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.297147 5030 scope.go:117] "RemoveContainer" containerID="1ecb71802fe925d424af6890a7f9e3f38dddf8386165e3ba76acf80e760c5d35"
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.308669 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7st4"]
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.318130 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7st4"]
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.342527 5030 scope.go:117] "RemoveContainer" containerID="0cb21136ac323917d27b72b4698718a23a1b6a6e614c38f86c22c2f6a3f4961d"
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.407910 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" path="/var/lib/kubelet/pods/b1c7cd3c-8576-48e4-a437-792e22b0daa4/volumes"
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.647529 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.653326 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.767058 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6w95\" (UniqueName: \"kubernetes.io/projected/d559fcf7-12fc-4984-8af3-65b7416c572c-kube-api-access-m6w95\") pod \"d559fcf7-12fc-4984-8af3-65b7416c572c\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") "
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.767170 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7scxd\" (UniqueName: \"kubernetes.io/projected/c9d28ad5-aa34-41cc-8875-0b6395e4b205-kube-api-access-7scxd\") pod \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") "
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.767284 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d559fcf7-12fc-4984-8af3-65b7416c572c-operator-scripts\") pod \"d559fcf7-12fc-4984-8af3-65b7416c572c\" (UID: \"d559fcf7-12fc-4984-8af3-65b7416c572c\") "
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.767425 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9d28ad5-aa34-41cc-8875-0b6395e4b205-operator-scripts\") pod \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\" (UID: \"c9d28ad5-aa34-41cc-8875-0b6395e4b205\") "
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.767848 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9d28ad5-aa34-41cc-8875-0b6395e4b205-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9d28ad5-aa34-41cc-8875-0b6395e4b205" (UID: "c9d28ad5-aa34-41cc-8875-0b6395e4b205"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.767891 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d559fcf7-12fc-4984-8af3-65b7416c572c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d559fcf7-12fc-4984-8af3-65b7416c572c" (UID: "d559fcf7-12fc-4984-8af3-65b7416c572c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.768161 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d559fcf7-12fc-4984-8af3-65b7416c572c-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.768181 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9d28ad5-aa34-41cc-8875-0b6395e4b205-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.775940 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d28ad5-aa34-41cc-8875-0b6395e4b205-kube-api-access-7scxd" (OuterVolumeSpecName: "kube-api-access-7scxd") pod "c9d28ad5-aa34-41cc-8875-0b6395e4b205" (UID: "c9d28ad5-aa34-41cc-8875-0b6395e4b205"). InnerVolumeSpecName "kube-api-access-7scxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.778585 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d559fcf7-12fc-4984-8af3-65b7416c572c-kube-api-access-m6w95" (OuterVolumeSpecName: "kube-api-access-m6w95") pod "d559fcf7-12fc-4984-8af3-65b7416c572c" (UID: "d559fcf7-12fc-4984-8af3-65b7416c572c"). InnerVolumeSpecName "kube-api-access-m6w95". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.870349 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6w95\" (UniqueName: \"kubernetes.io/projected/d559fcf7-12fc-4984-8af3-65b7416c572c-kube-api-access-m6w95\") on node \"crc\" DevicePath \"\""
Nov 28 12:16:58 crc kubenswrapper[5030]: I1128 12:16:58.870439 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7scxd\" (UniqueName: \"kubernetes.io/projected/c9d28ad5-aa34-41cc-8875-0b6395e4b205-kube-api-access-7scxd\") on node \"crc\" DevicePath \"\""
Nov 28 12:16:59 crc kubenswrapper[5030]: I1128 12:16:59.269387 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck" event={"ID":"d559fcf7-12fc-4984-8af3-65b7416c572c","Type":"ContainerDied","Data":"9c10d679ac54b0ebd185102f4a8c45b537ddd680e18cd084015798fc6ce69723"}
Nov 28 12:16:59 crc kubenswrapper[5030]: I1128 12:16:59.269456 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c10d679ac54b0ebd185102f4a8c45b537ddd680e18cd084015798fc6ce69723"
Nov 28 12:16:59 crc kubenswrapper[5030]: I1128 12:16:59.269413 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"
Nov 28 12:16:59 crc kubenswrapper[5030]: I1128 12:16:59.272904 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-ctb5m" event={"ID":"c9d28ad5-aa34-41cc-8875-0b6395e4b205","Type":"ContainerDied","Data":"913a0c4d9b9e8cd3b56cce7035396035e57f254752353279f988ff7e62a5cc7a"}
Nov 28 12:16:59 crc kubenswrapper[5030]: I1128 12:16:59.272947 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="913a0c4d9b9e8cd3b56cce7035396035e57f254752353279f988ff7e62a5cc7a"
Nov 28 12:16:59 crc kubenswrapper[5030]: I1128 12:16:59.273003 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-ctb5m"
Nov 28 12:17:03 crc kubenswrapper[5030]: I1128 12:17:03.201754 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 12:17:03 crc kubenswrapper[5030]: I1128 12:17:03.202744 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.210356 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-jlpsh"]
Nov 28 12:17:04 crc kubenswrapper[5030]: E1128 12:17:04.211421 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d28ad5-aa34-41cc-8875-0b6395e4b205" containerName="mariadb-database-create"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.211452 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d28ad5-aa34-41cc-8875-0b6395e4b205" containerName="mariadb-database-create"
Nov 28 12:17:04 crc kubenswrapper[5030]: E1128 12:17:04.211527 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="extract-content"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.211544 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="extract-content"
Nov 28 12:17:04 crc kubenswrapper[5030]: E1128 12:17:04.211580 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="extract-utilities"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.211594 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="extract-utilities"
Nov 28 12:17:04 crc kubenswrapper[5030]: E1128 12:17:04.211618 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d559fcf7-12fc-4984-8af3-65b7416c572c" containerName="mariadb-account-create-update"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.211631 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d559fcf7-12fc-4984-8af3-65b7416c572c" containerName="mariadb-account-create-update"
Nov 28 12:17:04 crc kubenswrapper[5030]: E1128 12:17:04.211665 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="registry-server"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.211682 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="registry-server"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.212000 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c7cd3c-8576-48e4-a437-792e22b0daa4" containerName="registry-server"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.212054 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d28ad5-aa34-41cc-8875-0b6395e4b205" containerName="mariadb-database-create"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.212080 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d559fcf7-12fc-4984-8af3-65b7416c572c" containerName="mariadb-account-create-update"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.212926 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.215995 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-gq57d"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.223532 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jlpsh"]
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.265127 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.383067 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-db-sync-config-data\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.383240 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7grv8\" (UniqueName: \"kubernetes.io/projected/b26196be-7779-4fd0-9671-972b80e3d673-kube-api-access-7grv8\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.383298 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-config-data\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.485268 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-db-sync-config-data\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.485335 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7grv8\" (UniqueName: \"kubernetes.io/projected/b26196be-7779-4fd0-9671-972b80e3d673-kube-api-access-7grv8\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.485368 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-config-data\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.492777 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-db-sync-config-data\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.502158 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-config-data\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.512441 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7grv8\" (UniqueName: \"kubernetes.io/projected/b26196be-7779-4fd0-9671-972b80e3d673-kube-api-access-7grv8\") pod \"glance-db-sync-jlpsh\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") " pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:04 crc kubenswrapper[5030]: I1128 12:17:04.590719 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:05 crc kubenswrapper[5030]: I1128 12:17:05.073759 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jlpsh"]
Nov 28 12:17:05 crc kubenswrapper[5030]: I1128 12:17:05.339840 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jlpsh" event={"ID":"b26196be-7779-4fd0-9671-972b80e3d673","Type":"ContainerStarted","Data":"79a33e8dc4eb3f93ca4fbc5ff53177dd5fa77532431a545e0d573c853f656a2c"}
Nov 28 12:17:06 crc kubenswrapper[5030]: I1128 12:17:06.354502 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jlpsh" event={"ID":"b26196be-7779-4fd0-9671-972b80e3d673","Type":"ContainerStarted","Data":"5822aee640d6e74d7cf2e863976531fb4cd97b342d02d650cbd63bbe605cf24c"}
Nov 28 12:17:06 crc kubenswrapper[5030]: I1128 12:17:06.384235 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-jlpsh" podStartSLOduration=2.384198679 podStartE2EDuration="2.384198679s" podCreationTimestamp="2025-11-28 12:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:17:06.376787558 +0000 UTC m=+1444.318530281" watchObservedRunningTime="2025-11-28 12:17:06.384198679 +0000 UTC m=+1444.325941402"
Nov 28 12:17:09 crc kubenswrapper[5030]: I1128 12:17:09.396541 5030 generic.go:334] "Generic (PLEG): container finished" podID="b26196be-7779-4fd0-9671-972b80e3d673" containerID="5822aee640d6e74d7cf2e863976531fb4cd97b342d02d650cbd63bbe605cf24c" exitCode=0
Nov 28 12:17:09 crc kubenswrapper[5030]: I1128 12:17:09.396671 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jlpsh" event={"ID":"b26196be-7779-4fd0-9671-972b80e3d673","Type":"ContainerDied","Data":"5822aee640d6e74d7cf2e863976531fb4cd97b342d02d650cbd63bbe605cf24c"}
Nov 28 12:17:10 crc kubenswrapper[5030]: I1128 12:17:10.753534 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jlpsh"
Nov 28 12:17:10 crc kubenswrapper[5030]: I1128 12:17:10.900727 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-config-data\") pod \"b26196be-7779-4fd0-9671-972b80e3d673\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") "
Nov 28 12:17:10 crc kubenswrapper[5030]: I1128 12:17:10.900955 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-db-sync-config-data\") pod \"b26196be-7779-4fd0-9671-972b80e3d673\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") "
Nov 28 12:17:10 crc kubenswrapper[5030]: I1128 12:17:10.901090 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7grv8\" (UniqueName: \"kubernetes.io/projected/b26196be-7779-4fd0-9671-972b80e3d673-kube-api-access-7grv8\") pod \"b26196be-7779-4fd0-9671-972b80e3d673\" (UID: \"b26196be-7779-4fd0-9671-972b80e3d673\") "
Nov 28 12:17:10 crc kubenswrapper[5030]: I1128 12:17:10.907401 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26196be-7779-4fd0-9671-972b80e3d673-kube-api-access-7grv8" (OuterVolumeSpecName: "kube-api-access-7grv8") pod "b26196be-7779-4fd0-9671-972b80e3d673" (UID: "b26196be-7779-4fd0-9671-972b80e3d673"). InnerVolumeSpecName "kube-api-access-7grv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:17:10 crc kubenswrapper[5030]: I1128 12:17:10.907528 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b26196be-7779-4fd0-9671-972b80e3d673" (UID: "b26196be-7779-4fd0-9671-972b80e3d673"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:17:10 crc kubenswrapper[5030]: I1128 12:17:10.938959 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-config-data" (OuterVolumeSpecName: "config-data") pod "b26196be-7779-4fd0-9671-972b80e3d673" (UID: "b26196be-7779-4fd0-9671-972b80e3d673"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:17:11 crc kubenswrapper[5030]: I1128 12:17:11.004048 5030 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:17:11 crc kubenswrapper[5030]: I1128 12:17:11.004118 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7grv8\" (UniqueName: \"kubernetes.io/projected/b26196be-7779-4fd0-9671-972b80e3d673-kube-api-access-7grv8\") on node \"crc\" DevicePath \"\""
Nov 28 12:17:11 crc kubenswrapper[5030]: I1128 12:17:11.004143 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26196be-7779-4fd0-9671-972b80e3d673-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:17:11 crc kubenswrapper[5030]: I1128 12:17:11.422613 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-jlpsh" event={"ID":"b26196be-7779-4fd0-9671-972b80e3d673","Type":"ContainerDied","Data":"79a33e8dc4eb3f93ca4fbc5ff53177dd5fa77532431a545e0d573c853f656a2c"}
Nov 28 12:17:11 crc kubenswrapper[5030]: I1128 12:17:11.423158 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79a33e8dc4eb3f93ca4fbc5ff53177dd5fa77532431a545e0d573c853f656a2c"
Nov 28 12:17:11 crc kubenswrapper[5030]: I1128 12:17:11.422675 5030 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-jlpsh" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.624923 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:17:12 crc kubenswrapper[5030]: E1128 12:17:12.626799 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26196be-7779-4fd0-9671-972b80e3d673" containerName="glance-db-sync" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.626936 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26196be-7779-4fd0-9671-972b80e3d673" containerName="glance-db-sync" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.627230 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26196be-7779-4fd0-9671-972b80e3d673" containerName="glance-db-sync" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.628304 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.631738 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-gq57d" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.631932 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.634413 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.639109 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.731775 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-config-data\") pod 
\"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.731819 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-logs\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.731850 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732047 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-lib-modules\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732124 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-sys\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732175 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-iscsi\") pod \"glance-default-single-0\" (UID: 
\"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732262 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732381 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-nvme\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732435 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvz9\" (UniqueName: \"kubernetes.io/projected/c10657ca-8c59-4e17-b108-f8f2048a99d9-kube-api-access-4jvz9\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732536 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732588 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-run\") pod \"glance-default-single-0\" (UID: 
\"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732712 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-httpd-run\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732770 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-dev\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.732831 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-scripts\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834293 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-lib-modules\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834356 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-sys\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " 
pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834380 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834403 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834437 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-nvme\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834457 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jvz9\" (UniqueName: \"kubernetes.io/projected/c10657ca-8c59-4e17-b108-f8f2048a99d9-kube-api-access-4jvz9\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834496 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: 
I1128 12:17:12.834498 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-lib-modules\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834570 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-sys\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834513 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-run\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834611 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834668 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834698 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-nvme\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834759 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-httpd-run\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834806 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-dev\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834845 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-scripts\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834963 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-config-data\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834981 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-logs\") pod \"glance-default-single-0\" (UID: 
\"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.835047 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.835344 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.835512 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.835997 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-dev\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.834555 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-run\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc 
kubenswrapper[5030]: I1128 12:17:12.836490 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-httpd-run\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.841569 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-logs\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.842160 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-scripts\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.842593 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-config-data\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.856966 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jvz9\" (UniqueName: \"kubernetes.io/projected/c10657ca-8c59-4e17-b108-f8f2048a99d9-kube-api-access-4jvz9\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.858621 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.863940 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-0\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:12 crc kubenswrapper[5030]: I1128 12:17:12.943605 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:13 crc kubenswrapper[5030]: I1128 12:17:13.172660 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:17:13 crc kubenswrapper[5030]: I1128 12:17:13.457958 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"c10657ca-8c59-4e17-b108-f8f2048a99d9","Type":"ContainerStarted","Data":"10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8"} Nov 28 12:17:13 crc kubenswrapper[5030]: I1128 12:17:13.458024 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"c10657ca-8c59-4e17-b108-f8f2048a99d9","Type":"ContainerStarted","Data":"e270e5aa90c1e001f2483758dfcfbd64b11115aa1b87e7fc02c1c2c250fca6c4"} Nov 28 12:17:14 crc kubenswrapper[5030]: I1128 12:17:14.481291 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"c10657ca-8c59-4e17-b108-f8f2048a99d9","Type":"ContainerStarted","Data":"bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c"} Nov 28 12:17:14 crc kubenswrapper[5030]: I1128 12:17:14.542123 5030 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.542061334 podStartE2EDuration="2.542061334s" podCreationTimestamp="2025-11-28 12:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:17:14.526445661 +0000 UTC m=+1452.468188394" watchObservedRunningTime="2025-11-28 12:17:14.542061334 +0000 UTC m=+1452.483804057" Nov 28 12:17:22 crc kubenswrapper[5030]: I1128 12:17:22.944130 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:22 crc kubenswrapper[5030]: I1128 12:17:22.945131 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:22 crc kubenswrapper[5030]: I1128 12:17:22.989363 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:23 crc kubenswrapper[5030]: I1128 12:17:23.013192 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:23 crc kubenswrapper[5030]: I1128 12:17:23.568262 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:23 crc kubenswrapper[5030]: I1128 12:17:23.568323 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:25 crc kubenswrapper[5030]: I1128 12:17:25.541995 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:25 crc kubenswrapper[5030]: I1128 12:17:25.585456 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:17:25 crc kubenswrapper[5030]: I1128 12:17:25.758631 5030 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:27 crc kubenswrapper[5030]: I1128 12:17:27.938929 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 28 12:17:27 crc kubenswrapper[5030]: I1128 12:17:27.941340 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:27 crc kubenswrapper[5030]: I1128 12:17:27.957325 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:17:27 crc kubenswrapper[5030]: I1128 12:17:27.960377 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:27 crc kubenswrapper[5030]: I1128 12:17:27.970907 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 28 12:17:27 crc kubenswrapper[5030]: I1128 12:17:27.981122 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.004142 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.105936 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-httpd-run\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.105987 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106011 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106033 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-nvme\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106055 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgrl\" (UniqueName: \"kubernetes.io/projected/1de31346-dab3-413f-90b4-1279c3e28bab-kube-api-access-kcgrl\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106072 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106243 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-config-data\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106321 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-nvme\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106351 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106401 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106428 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-run\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106512 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-sys\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106585 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106604 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106662 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-httpd-run\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106716 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-dev\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106861 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.106987 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107080 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-lib-modules\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107140 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-scripts\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107212 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-config-data\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107253 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-logs\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107310 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-logs\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107327 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-run\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107363 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-lib-modules\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107392 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-dev\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107414 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-sxsdd\" (UniqueName: \"kubernetes.io/projected/d81c1836-8212-4dbb-a029-675702077e93-kube-api-access-sxsdd\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107495 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-sys\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.107524 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-scripts\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.133375 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209617 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-httpd-run\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209685 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209714 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-nvme\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209746 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgrl\" (UniqueName: \"kubernetes.io/projected/1de31346-dab3-413f-90b4-1279c3e28bab-kube-api-access-kcgrl\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209772 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209802 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-config-data\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209826 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-nvme\") 
pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209832 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209853 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209844 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-nvme\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209896 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209885 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 
12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209946 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-run\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209956 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-nvme\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.209975 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-sys\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210013 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210040 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-httpd-run\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210071 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-dev\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210094 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210116 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210144 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-lib-modules\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210170 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-scripts\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210198 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-config-data\") pod \"glance-default-single-2\" (UID: 
\"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210216 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-logs\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210241 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-logs\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210261 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-run\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210291 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-lib-modules\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210313 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-dev\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 
12:17:28.210332 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsdd\" (UniqueName: \"kubernetes.io/projected/d81c1836-8212-4dbb-a029-675702077e93-kube-api-access-sxsdd\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210361 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-sys\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210384 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-scripts\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210620 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-httpd-run\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210623 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-httpd-run\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210661 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-lib-modules\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210682 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210860 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210870 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210872 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-sys\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210926 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-run\") pod 
\"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.210966 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-sys\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.211002 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-run\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.211056 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.211096 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-lib-modules\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.211198 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-dev\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: 
I1128 12:17:28.211391 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-dev\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.211446 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-logs\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.211652 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.211729 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-logs\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.219307 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-scripts\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.220570 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-scripts\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.221229 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-config-data\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.226593 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-config-data\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.227387 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgrl\" (UniqueName: \"kubernetes.io/projected/1de31346-dab3-413f-90b4-1279c3e28bab-kube-api-access-kcgrl\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.231720 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsdd\" (UniqueName: \"kubernetes.io/projected/d81c1836-8212-4dbb-a029-675702077e93-kube-api-access-sxsdd\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.237943 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-1\" (UID: 
\"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.239652 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-single-1\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.253024 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-single-2\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.269339 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.289289 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.792765 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 28 12:17:28 crc kubenswrapper[5030]: I1128 12:17:28.810084 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:17:28 crc kubenswrapper[5030]: W1128 12:17:28.821825 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1de31346_dab3_413f_90b4_1279c3e28bab.slice/crio-ff865bed254a9159c11c17c2096f40970e374fe394cb58988d680b81caebc78c WatchSource:0}: Error finding container ff865bed254a9159c11c17c2096f40970e374fe394cb58988d680b81caebc78c: Status 404 returned error can't find the container with id ff865bed254a9159c11c17c2096f40970e374fe394cb58988d680b81caebc78c Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.632606 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"1de31346-dab3-413f-90b4-1279c3e28bab","Type":"ContainerStarted","Data":"19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b"} Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.633371 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"1de31346-dab3-413f-90b4-1279c3e28bab","Type":"ContainerStarted","Data":"586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526"} Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.633410 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"1de31346-dab3-413f-90b4-1279c3e28bab","Type":"ContainerStarted","Data":"ff865bed254a9159c11c17c2096f40970e374fe394cb58988d680b81caebc78c"} Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.637437 5030 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"d81c1836-8212-4dbb-a029-675702077e93","Type":"ContainerStarted","Data":"cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee"} Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.637517 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"d81c1836-8212-4dbb-a029-675702077e93","Type":"ContainerStarted","Data":"284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd"} Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.637540 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"d81c1836-8212-4dbb-a029-675702077e93","Type":"ContainerStarted","Data":"e50302f47510d1a26838b4149af4766dfb9f9bff8eae4ee29aa451cedb50c32c"} Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.664658 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=3.664624629 podStartE2EDuration="3.664624629s" podCreationTimestamp="2025-11-28 12:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:17:29.660148788 +0000 UTC m=+1467.601891511" watchObservedRunningTime="2025-11-28 12:17:29.664624629 +0000 UTC m=+1467.606367352" Nov 28 12:17:29 crc kubenswrapper[5030]: I1128 12:17:29.697204 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-2" podStartSLOduration=3.697177862 podStartE2EDuration="3.697177862s" podCreationTimestamp="2025-11-28 12:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:17:29.692985188 +0000 UTC m=+1467.634727911" watchObservedRunningTime="2025-11-28 12:17:29.697177862 +0000 
UTC m=+1467.638920555" Nov 28 12:17:33 crc kubenswrapper[5030]: I1128 12:17:33.202181 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:17:33 crc kubenswrapper[5030]: I1128 12:17:33.202863 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.269848 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.270537 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.290150 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.290235 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.307995 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.323291 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.341505 5030 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.376803 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.718968 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.719354 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.719424 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:38 crc kubenswrapper[5030]: I1128 12:17:38.719504 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:40 crc kubenswrapper[5030]: I1128 12:17:40.677901 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:40 crc kubenswrapper[5030]: I1128 12:17:40.683677 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:40 crc kubenswrapper[5030]: I1128 12:17:40.687098 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:40 crc kubenswrapper[5030]: I1128 12:17:40.738749 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:17:40 crc kubenswrapper[5030]: I1128 12:17:40.775030 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:41 crc kubenswrapper[5030]: I1128 12:17:41.612537 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["glance-kuttl-tests/glance-default-single-2"] Nov 28 12:17:41 crc kubenswrapper[5030]: I1128 12:17:41.621605 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:17:42 crc kubenswrapper[5030]: I1128 12:17:42.757062 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-log" containerID="cri-o://284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd" gracePeriod=30 Nov 28 12:17:42 crc kubenswrapper[5030]: I1128 12:17:42.757232 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-log" containerID="cri-o://586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526" gracePeriod=30 Nov 28 12:17:42 crc kubenswrapper[5030]: I1128 12:17:42.757197 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-httpd" containerID="cri-o://cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee" gracePeriod=30 Nov 28 12:17:42 crc kubenswrapper[5030]: I1128 12:17:42.757259 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-httpd" containerID="cri-o://19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b" gracePeriod=30 Nov 28 12:17:42 crc kubenswrapper[5030]: I1128 12:17:42.770727 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-2" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.135:9292/healthcheck\": EOF" 
Nov 28 12:17:43 crc kubenswrapper[5030]: I1128 12:17:43.769253 5030 generic.go:334] "Generic (PLEG): container finished" podID="d81c1836-8212-4dbb-a029-675702077e93" containerID="284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd" exitCode=143 Nov 28 12:17:43 crc kubenswrapper[5030]: I1128 12:17:43.769638 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"d81c1836-8212-4dbb-a029-675702077e93","Type":"ContainerDied","Data":"284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd"} Nov 28 12:17:43 crc kubenswrapper[5030]: I1128 12:17:43.771710 5030 generic.go:334] "Generic (PLEG): container finished" podID="1de31346-dab3-413f-90b4-1279c3e28bab" containerID="586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526" exitCode=143 Nov 28 12:17:43 crc kubenswrapper[5030]: I1128 12:17:43.771754 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"1de31346-dab3-413f-90b4-1279c3e28bab","Type":"ContainerDied","Data":"586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526"} Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.396277 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.557880 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558262 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558309 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-logs\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558384 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-var-locks-brick\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558407 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-dev\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558442 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-lib-modules\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558496 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-config-data\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558510 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558565 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-scripts\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558625 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcgrl\" (UniqueName: \"kubernetes.io/projected/1de31346-dab3-413f-90b4-1279c3e28bab-kube-api-access-kcgrl\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558656 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-run\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558682 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-nvme\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558718 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-httpd-run\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558737 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-sys\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.558758 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-iscsi\") pod \"1de31346-dab3-413f-90b4-1279c3e28bab\" (UID: \"1de31346-dab3-413f-90b4-1279c3e28bab\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560039 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560507 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-dev" (OuterVolumeSpecName: "dev") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560589 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560650 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-run" (OuterVolumeSpecName: "run") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560683 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-sys" (OuterVolumeSpecName: "sys") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560698 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560791 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560890 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.560948 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-logs" (OuterVolumeSpecName: "logs") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.565496 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "local-storage13-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.565595 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-scripts" (OuterVolumeSpecName: "scripts") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.566266 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de31346-dab3-413f-90b4-1279c3e28bab-kube-api-access-kcgrl" (OuterVolumeSpecName: "kube-api-access-kcgrl") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "kube-api-access-kcgrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.568651 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.600062 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-config-data" (OuterVolumeSpecName: "config-data") pod "1de31346-dab3-413f-90b4-1279c3e28bab" (UID: "1de31346-dab3-413f-90b4-1279c3e28bab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660363 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660459 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxsdd\" (UniqueName: \"kubernetes.io/projected/d81c1836-8212-4dbb-a029-675702077e93-kube-api-access-sxsdd\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660594 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-dev\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660650 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-scripts\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660680 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-httpd-run\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660716 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660701 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-dev" (OuterVolumeSpecName: "dev") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660758 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-logs\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660782 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-iscsi\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660832 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-config-data\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660887 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-var-locks-brick\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660912 5030 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-nvme\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660953 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-sys\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.660975 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-lib-modules\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661029 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-run\") pod \"d81c1836-8212-4dbb-a029-675702077e93\" (UID: \"d81c1836-8212-4dbb-a029-675702077e93\") " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661165 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661519 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661545 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661559 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661574 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661585 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661597 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661619 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661631 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1de31346-dab3-413f-90b4-1279c3e28bab-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661642 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661656 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcgrl\" (UniqueName: \"kubernetes.io/projected/1de31346-dab3-413f-90b4-1279c3e28bab-kube-api-access-kcgrl\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661668 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661680 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661694 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661709 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1de31346-dab3-413f-90b4-1279c3e28bab-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661720 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.661732 5030 reconciler_common.go:293] "Volume 
detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1de31346-dab3-413f-90b4-1279c3e28bab-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663195 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663281 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-sys" (OuterVolumeSpecName: "sys") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663305 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663329 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663348 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-run" (OuterVolumeSpecName: "run") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663365 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663548 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81c1836-8212-4dbb-a029-675702077e93-kube-api-access-sxsdd" (OuterVolumeSpecName: "kube-api-access-sxsdd") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "kube-api-access-sxsdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663932 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-logs" (OuterVolumeSpecName: "logs") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.663964 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.665380 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-scripts" (OuterVolumeSpecName: "scripts") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.668067 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.678731 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.678981 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.694956 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-config-data" (OuterVolumeSpecName: "config-data") pod "d81c1836-8212-4dbb-a029-675702077e93" (UID: "d81c1836-8212-4dbb-a029-675702077e93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763421 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763495 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763508 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxsdd\" (UniqueName: \"kubernetes.io/projected/d81c1836-8212-4dbb-a029-675702077e93-kube-api-access-sxsdd\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763520 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc 
kubenswrapper[5030]: I1128 12:17:46.763544 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763555 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d81c1836-8212-4dbb-a029-675702077e93-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763567 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763578 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81c1836-8212-4dbb-a029-675702077e93-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763586 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763594 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763603 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763612 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-sys\") on node \"crc\" 
DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763620 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.763627 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d81c1836-8212-4dbb-a029-675702077e93-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.776862 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.777066 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.800044 5030 generic.go:334] "Generic (PLEG): container finished" podID="1de31346-dab3-413f-90b4-1279c3e28bab" containerID="19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b" exitCode=0 Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.800124 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.800153 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"1de31346-dab3-413f-90b4-1279c3e28bab","Type":"ContainerDied","Data":"19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b"} Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.800209 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"1de31346-dab3-413f-90b4-1279c3e28bab","Type":"ContainerDied","Data":"ff865bed254a9159c11c17c2096f40970e374fe394cb58988d680b81caebc78c"} Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.800238 5030 scope.go:117] "RemoveContainer" containerID="19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.803781 5030 generic.go:334] "Generic (PLEG): container finished" podID="d81c1836-8212-4dbb-a029-675702077e93" containerID="cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee" exitCode=0 Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.803838 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"d81c1836-8212-4dbb-a029-675702077e93","Type":"ContainerDied","Data":"cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee"} Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.803875 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"d81c1836-8212-4dbb-a029-675702077e93","Type":"ContainerDied","Data":"e50302f47510d1a26838b4149af4766dfb9f9bff8eae4ee29aa451cedb50c32c"} Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.804042 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.840430 5030 scope.go:117] "RemoveContainer" containerID="586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.859583 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.864876 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.864930 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.865994 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.867781 5030 scope.go:117] "RemoveContainer" containerID="19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b" Nov 28 12:17:46 crc kubenswrapper[5030]: E1128 12:17:46.868165 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b\": container with ID starting with 19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b not found: ID does not exist" containerID="19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.868235 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b"} err="failed to get container status 
\"19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b\": rpc error: code = NotFound desc = could not find container \"19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b\": container with ID starting with 19cfeadbc72b89b8dc1c1bd527df0d2818bd267ca29d31ccd4393e61d0d45f8b not found: ID does not exist" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.868276 5030 scope.go:117] "RemoveContainer" containerID="586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526" Nov 28 12:17:46 crc kubenswrapper[5030]: E1128 12:17:46.868634 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526\": container with ID starting with 586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526 not found: ID does not exist" containerID="586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.868676 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526"} err="failed to get container status \"586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526\": rpc error: code = NotFound desc = could not find container \"586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526\": container with ID starting with 586e075d99bcc94daa58599182e92f475966264c34fbf2707c4ba72866bea526 not found: ID does not exist" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.868706 5030 scope.go:117] "RemoveContainer" containerID="cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.873868 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.880533 5030 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.886587 5030 scope.go:117] "RemoveContainer" containerID="284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.903251 5030 scope.go:117] "RemoveContainer" containerID="cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee" Nov 28 12:17:46 crc kubenswrapper[5030]: E1128 12:17:46.903867 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee\": container with ID starting with cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee not found: ID does not exist" containerID="cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.903906 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee"} err="failed to get container status \"cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee\": rpc error: code = NotFound desc = could not find container \"cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee\": container with ID starting with cd9201f772d52ce8afd975dc59969a7e4ef5c586d116207237ebba9e431133ee not found: ID does not exist" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.903937 5030 scope.go:117] "RemoveContainer" containerID="284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd" Nov 28 12:17:46 crc kubenswrapper[5030]: E1128 12:17:46.904203 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd\": container with ID starting with 
284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd not found: ID does not exist" containerID="284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd" Nov 28 12:17:46 crc kubenswrapper[5030]: I1128 12:17:46.904238 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd"} err="failed to get container status \"284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd\": rpc error: code = NotFound desc = could not find container \"284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd\": container with ID starting with 284398b5ef50820798655f6e769d4c87b2051215e86b303977ab834c290a2bfd not found: ID does not exist" Nov 28 12:17:47 crc kubenswrapper[5030]: I1128 12:17:47.795988 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:17:47 crc kubenswrapper[5030]: I1128 12:17:47.796735 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-log" containerID="cri-o://10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8" gracePeriod=30 Nov 28 12:17:47 crc kubenswrapper[5030]: I1128 12:17:47.796921 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-httpd" containerID="cri-o://bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c" gracePeriod=30 Nov 28 12:17:48 crc kubenswrapper[5030]: I1128 12:17:48.401004 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" path="/var/lib/kubelet/pods/1de31346-dab3-413f-90b4-1279c3e28bab/volumes" Nov 28 12:17:48 crc kubenswrapper[5030]: I1128 12:17:48.402178 5030 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81c1836-8212-4dbb-a029-675702077e93" path="/var/lib/kubelet/pods/d81c1836-8212-4dbb-a029-675702077e93/volumes" Nov 28 12:17:48 crc kubenswrapper[5030]: I1128 12:17:48.843209 5030 generic.go:334] "Generic (PLEG): container finished" podID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerID="10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8" exitCode=143 Nov 28 12:17:48 crc kubenswrapper[5030]: I1128 12:17:48.843278 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"c10657ca-8c59-4e17-b108-f8f2048a99d9","Type":"ContainerDied","Data":"10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8"} Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.430776 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540312 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jvz9\" (UniqueName: \"kubernetes.io/projected/c10657ca-8c59-4e17-b108-f8f2048a99d9-kube-api-access-4jvz9\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540443 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-sys\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540625 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-sys" (OuterVolumeSpecName: "sys") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). 
InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540693 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-var-locks-brick\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540739 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-dev\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540764 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540892 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-lib-modules\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540933 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540880 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-dev" (OuterVolumeSpecName: "dev") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540969 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.540975 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-scripts\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541148 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-config-data\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541265 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-nvme\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541344 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541342 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541467 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-iscsi\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541565 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-httpd-run\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541573 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541592 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-run\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541669 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-logs\") pod \"c10657ca-8c59-4e17-b108-f8f2048a99d9\" (UID: \"c10657ca-8c59-4e17-b108-f8f2048a99d9\") " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.541770 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-run" (OuterVolumeSpecName: "run") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542432 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-logs" (OuterVolumeSpecName: "logs") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542518 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542539 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542552 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542562 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542572 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542582 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.542595 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c10657ca-8c59-4e17-b108-f8f2048a99d9-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.543537 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.547846 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-scripts" (OuterVolumeSpecName: "scripts") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.548446 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.548990 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10657ca-8c59-4e17-b108-f8f2048a99d9-kube-api-access-4jvz9" (OuterVolumeSpecName: "kube-api-access-4jvz9") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "kube-api-access-4jvz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.550093 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.593788 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-config-data" (OuterVolumeSpecName: "config-data") pod "c10657ca-8c59-4e17-b108-f8f2048a99d9" (UID: "c10657ca-8c59-4e17-b108-f8f2048a99d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.644347 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.644410 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jvz9\" (UniqueName: \"kubernetes.io/projected/c10657ca-8c59-4e17-b108-f8f2048a99d9-kube-api-access-4jvz9\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.644470 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.644520 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.644538 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10657ca-8c59-4e17-b108-f8f2048a99d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.644577 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.644594 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10657ca-8c59-4e17-b108-f8f2048a99d9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.657366 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.669811 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.746620 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.746956 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.870062 5030 generic.go:334] "Generic (PLEG): container finished" podID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerID="bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c" exitCode=0 Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.870116 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"c10657ca-8c59-4e17-b108-f8f2048a99d9","Type":"ContainerDied","Data":"bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c"} Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.870125 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.870153 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"c10657ca-8c59-4e17-b108-f8f2048a99d9","Type":"ContainerDied","Data":"e270e5aa90c1e001f2483758dfcfbd64b11115aa1b87e7fc02c1c2c250fca6c4"} Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.870175 5030 scope.go:117] "RemoveContainer" containerID="bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.896325 5030 scope.go:117] "RemoveContainer" containerID="10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.904156 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.911588 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.920585 5030 scope.go:117] "RemoveContainer" containerID="bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c" Nov 28 12:17:51 crc kubenswrapper[5030]: E1128 12:17:51.921145 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c\": container with ID starting with bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c not found: ID does not exist" containerID="bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.921238 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c"} err="failed to get container status 
\"bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c\": rpc error: code = NotFound desc = could not find container \"bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c\": container with ID starting with bd2fe84670546d22c6e79679abae47595eb1cb07939c9d6a3c45b7128d5eca7c not found: ID does not exist" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.921347 5030 scope.go:117] "RemoveContainer" containerID="10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8" Nov 28 12:17:51 crc kubenswrapper[5030]: E1128 12:17:51.921888 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8\": container with ID starting with 10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8 not found: ID does not exist" containerID="10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8" Nov 28 12:17:51 crc kubenswrapper[5030]: I1128 12:17:51.921948 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8"} err="failed to get container status \"10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8\": rpc error: code = NotFound desc = could not find container \"10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8\": container with ID starting with 10737de7b5a9f6867fd66553829c03602161d3659b025a652710b9b236465fd8 not found: ID does not exist" Nov 28 12:17:52 crc kubenswrapper[5030]: I1128 12:17:52.406030 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" path="/var/lib/kubelet/pods/c10657ca-8c59-4e17-b108-f8f2048a99d9/volumes" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.214841 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jlpsh"] Nov 28 12:17:53 crc 
kubenswrapper[5030]: I1128 12:17:53.231030 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-jlpsh"] Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238035 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glanced5db-account-delete-drjsx"] Nov 28 12:17:53 crc kubenswrapper[5030]: E1128 12:17:53.238398 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238418 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: E1128 12:17:53.238431 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238439 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: E1128 12:17:53.238450 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238457 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: E1128 12:17:53.238495 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238502 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: E1128 12:17:53.238515 5030 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238521 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: E1128 12:17:53.238528 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238534 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238664 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238682 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238690 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81c1836-8212-4dbb-a029-675702077e93" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238698 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-httpd" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238706 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10657ca-8c59-4e17-b108-f8f2048a99d9" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.238716 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de31346-dab3-413f-90b4-1279c3e28bab" containerName="glance-log" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.239313 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.250558 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glanced5db-account-delete-drjsx"] Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.340850 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e62b875-4653-4ed8-b929-c946fda041c5-operator-scripts\") pod \"glanced5db-account-delete-drjsx\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.340950 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jfrj\" (UniqueName: \"kubernetes.io/projected/6e62b875-4653-4ed8-b929-c946fda041c5-kube-api-access-4jfrj\") pod \"glanced5db-account-delete-drjsx\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.441985 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jfrj\" (UniqueName: \"kubernetes.io/projected/6e62b875-4653-4ed8-b929-c946fda041c5-kube-api-access-4jfrj\") pod \"glanced5db-account-delete-drjsx\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.443053 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e62b875-4653-4ed8-b929-c946fda041c5-operator-scripts\") pod \"glanced5db-account-delete-drjsx\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 
12:17:53.444084 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e62b875-4653-4ed8-b929-c946fda041c5-operator-scripts\") pod \"glanced5db-account-delete-drjsx\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.464070 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jfrj\" (UniqueName: \"kubernetes.io/projected/6e62b875-4653-4ed8-b929-c946fda041c5-kube-api-access-4jfrj\") pod \"glanced5db-account-delete-drjsx\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.556901 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.796829 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glanced5db-account-delete-drjsx"] Nov 28 12:17:53 crc kubenswrapper[5030]: I1128 12:17:53.891981 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" event={"ID":"6e62b875-4653-4ed8-b929-c946fda041c5","Type":"ContainerStarted","Data":"0e9f8117778f0d2bf98a46a42353791a098974d6b9e1370f9eac68c29fdc16e9"} Nov 28 12:17:54 crc kubenswrapper[5030]: I1128 12:17:54.407024 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26196be-7779-4fd0-9671-972b80e3d673" path="/var/lib/kubelet/pods/b26196be-7779-4fd0-9671-972b80e3d673/volumes" Nov 28 12:17:54 crc kubenswrapper[5030]: I1128 12:17:54.907680 5030 generic.go:334] "Generic (PLEG): container finished" podID="6e62b875-4653-4ed8-b929-c946fda041c5" containerID="6e1f779c21fd85d02bbfa8e1d38bda1b11f942d277c0f4998be3b75a904388cb" exitCode=0 Nov 28 12:17:54 crc 
kubenswrapper[5030]: I1128 12:17:54.907993 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" event={"ID":"6e62b875-4653-4ed8-b929-c946fda041c5","Type":"ContainerDied","Data":"6e1f779c21fd85d02bbfa8e1d38bda1b11f942d277c0f4998be3b75a904388cb"} Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.269328 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.316135 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e62b875-4653-4ed8-b929-c946fda041c5-operator-scripts\") pod \"6e62b875-4653-4ed8-b929-c946fda041c5\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.316258 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jfrj\" (UniqueName: \"kubernetes.io/projected/6e62b875-4653-4ed8-b929-c946fda041c5-kube-api-access-4jfrj\") pod \"6e62b875-4653-4ed8-b929-c946fda041c5\" (UID: \"6e62b875-4653-4ed8-b929-c946fda041c5\") " Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.318216 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e62b875-4653-4ed8-b929-c946fda041c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e62b875-4653-4ed8-b929-c946fda041c5" (UID: "6e62b875-4653-4ed8-b929-c946fda041c5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.327283 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e62b875-4653-4ed8-b929-c946fda041c5-kube-api-access-4jfrj" (OuterVolumeSpecName: "kube-api-access-4jfrj") pod "6e62b875-4653-4ed8-b929-c946fda041c5" (UID: "6e62b875-4653-4ed8-b929-c946fda041c5"). InnerVolumeSpecName "kube-api-access-4jfrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.418857 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jfrj\" (UniqueName: \"kubernetes.io/projected/6e62b875-4653-4ed8-b929-c946fda041c5-kube-api-access-4jfrj\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.418903 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e62b875-4653-4ed8-b929-c946fda041c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.930255 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" event={"ID":"6e62b875-4653-4ed8-b929-c946fda041c5","Type":"ContainerDied","Data":"0e9f8117778f0d2bf98a46a42353791a098974d6b9e1370f9eac68c29fdc16e9"} Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.930791 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9f8117778f0d2bf98a46a42353791a098974d6b9e1370f9eac68c29fdc16e9" Nov 28 12:17:56 crc kubenswrapper[5030]: I1128 12:17:56.930338 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glanced5db-account-delete-drjsx" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.300102 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-ctb5m"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.307648 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-ctb5m"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.314722 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glanced5db-account-delete-drjsx"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.323724 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glanced5db-account-delete-drjsx"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.328337 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.334048 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-d5db-account-create-update-sf5ck"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.387029 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-bdjls"] Nov 28 12:17:58 crc kubenswrapper[5030]: E1128 12:17:58.387506 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e62b875-4653-4ed8-b929-c946fda041c5" containerName="mariadb-account-delete" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.387529 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e62b875-4653-4ed8-b929-c946fda041c5" containerName="mariadb-account-delete" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.387691 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e62b875-4653-4ed8-b929-c946fda041c5" containerName="mariadb-account-delete" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.388283 5030 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.419784 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e62b875-4653-4ed8-b929-c946fda041c5" path="/var/lib/kubelet/pods/6e62b875-4653-4ed8-b929-c946fda041c5/volumes" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.420296 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9d28ad5-aa34-41cc-8875-0b6395e4b205" path="/var/lib/kubelet/pods/c9d28ad5-aa34-41cc-8875-0b6395e4b205/volumes" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.420780 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d559fcf7-12fc-4984-8af3-65b7416c572c" path="/var/lib/kubelet/pods/d559fcf7-12fc-4984-8af3-65b7416c572c/volumes" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.421246 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-bdjls"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.459884 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b577ff1-dc09-42d5-95b4-41d690941740-operator-scripts\") pod \"glance-db-create-bdjls\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.459978 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7sdj\" (UniqueName: \"kubernetes.io/projected/1b577ff1-dc09-42d5-95b4-41d690941740-kube-api-access-x7sdj\") pod \"glance-db-create-bdjls\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.475087 5030 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["glance-kuttl-tests/glance-cb75-account-create-update-69wqw"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.476740 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.479200 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.485854 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cb75-account-create-update-69wqw"] Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.563212 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2731a8d0-e56b-4f9a-a63c-60e93134be84-operator-scripts\") pod \"glance-cb75-account-create-update-69wqw\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.563499 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rcw8\" (UniqueName: \"kubernetes.io/projected/2731a8d0-e56b-4f9a-a63c-60e93134be84-kube-api-access-8rcw8\") pod \"glance-cb75-account-create-update-69wqw\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.563583 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b577ff1-dc09-42d5-95b4-41d690941740-operator-scripts\") pod \"glance-db-create-bdjls\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.563651 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7sdj\" (UniqueName: \"kubernetes.io/projected/1b577ff1-dc09-42d5-95b4-41d690941740-kube-api-access-x7sdj\") pod \"glance-db-create-bdjls\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.565570 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b577ff1-dc09-42d5-95b4-41d690941740-operator-scripts\") pod \"glance-db-create-bdjls\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.582153 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7sdj\" (UniqueName: \"kubernetes.io/projected/1b577ff1-dc09-42d5-95b4-41d690941740-kube-api-access-x7sdj\") pod \"glance-db-create-bdjls\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.665318 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rcw8\" (UniqueName: \"kubernetes.io/projected/2731a8d0-e56b-4f9a-a63c-60e93134be84-kube-api-access-8rcw8\") pod \"glance-cb75-account-create-update-69wqw\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.665457 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2731a8d0-e56b-4f9a-a63c-60e93134be84-operator-scripts\") pod \"glance-cb75-account-create-update-69wqw\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:58 crc 
kubenswrapper[5030]: I1128 12:17:58.666707 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2731a8d0-e56b-4f9a-a63c-60e93134be84-operator-scripts\") pod \"glance-cb75-account-create-update-69wqw\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.693246 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rcw8\" (UniqueName: \"kubernetes.io/projected/2731a8d0-e56b-4f9a-a63c-60e93134be84-kube-api-access-8rcw8\") pod \"glance-cb75-account-create-update-69wqw\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.720176 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:17:58 crc kubenswrapper[5030]: I1128 12:17:58.798076 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.195801 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-bdjls"] Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.295278 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cb75-account-create-update-69wqw"] Nov 28 12:17:59 crc kubenswrapper[5030]: W1128 12:17:59.300245 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2731a8d0_e56b_4f9a_a63c_60e93134be84.slice/crio-319f12fd32fb993d04cc14329c897f3f81c79c90dcbb0f04db2e2b661282d883 WatchSource:0}: Error finding container 319f12fd32fb993d04cc14329c897f3f81c79c90dcbb0f04db2e2b661282d883: Status 404 returned error can't find the container with id 319f12fd32fb993d04cc14329c897f3f81c79c90dcbb0f04db2e2b661282d883 Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.971686 5030 generic.go:334] "Generic (PLEG): container finished" podID="2731a8d0-e56b-4f9a-a63c-60e93134be84" containerID="2d305980c505e7d29e7230dcb196f99b0da6ef9b887dc4e8204e9f78eb1b86c8" exitCode=0 Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.973743 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" event={"ID":"2731a8d0-e56b-4f9a-a63c-60e93134be84","Type":"ContainerDied","Data":"2d305980c505e7d29e7230dcb196f99b0da6ef9b887dc4e8204e9f78eb1b86c8"} Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.973842 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" event={"ID":"2731a8d0-e56b-4f9a-a63c-60e93134be84","Type":"ContainerStarted","Data":"319f12fd32fb993d04cc14329c897f3f81c79c90dcbb0f04db2e2b661282d883"} Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.974803 5030 generic.go:334] "Generic 
(PLEG): container finished" podID="1b577ff1-dc09-42d5-95b4-41d690941740" containerID="b11a1f31aa8fc72c072c5cdf734fd2d71d4da4901ceafcc49a99a0b105bd63f2" exitCode=0 Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.974846 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-bdjls" event={"ID":"1b577ff1-dc09-42d5-95b4-41d690941740","Type":"ContainerDied","Data":"b11a1f31aa8fc72c072c5cdf734fd2d71d4da4901ceafcc49a99a0b105bd63f2"} Nov 28 12:17:59 crc kubenswrapper[5030]: I1128 12:17:59.974894 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-bdjls" event={"ID":"1b577ff1-dc09-42d5-95b4-41d690941740","Type":"ContainerStarted","Data":"c38972930b74f296bd34c17fe1da86395ab253a836f2e5c684264055bd5894a2"} Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.355667 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.360770 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.420670 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b577ff1-dc09-42d5-95b4-41d690941740-operator-scripts\") pod \"1b577ff1-dc09-42d5-95b4-41d690941740\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.420755 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2731a8d0-e56b-4f9a-a63c-60e93134be84-operator-scripts\") pod \"2731a8d0-e56b-4f9a-a63c-60e93134be84\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.420874 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rcw8\" (UniqueName: \"kubernetes.io/projected/2731a8d0-e56b-4f9a-a63c-60e93134be84-kube-api-access-8rcw8\") pod \"2731a8d0-e56b-4f9a-a63c-60e93134be84\" (UID: \"2731a8d0-e56b-4f9a-a63c-60e93134be84\") " Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.420928 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7sdj\" (UniqueName: \"kubernetes.io/projected/1b577ff1-dc09-42d5-95b4-41d690941740-kube-api-access-x7sdj\") pod \"1b577ff1-dc09-42d5-95b4-41d690941740\" (UID: \"1b577ff1-dc09-42d5-95b4-41d690941740\") " Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.421800 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2731a8d0-e56b-4f9a-a63c-60e93134be84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2731a8d0-e56b-4f9a-a63c-60e93134be84" (UID: "2731a8d0-e56b-4f9a-a63c-60e93134be84"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.422661 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b577ff1-dc09-42d5-95b4-41d690941740-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b577ff1-dc09-42d5-95b4-41d690941740" (UID: "1b577ff1-dc09-42d5-95b4-41d690941740"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.427942 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b577ff1-dc09-42d5-95b4-41d690941740-kube-api-access-x7sdj" (OuterVolumeSpecName: "kube-api-access-x7sdj") pod "1b577ff1-dc09-42d5-95b4-41d690941740" (UID: "1b577ff1-dc09-42d5-95b4-41d690941740"). InnerVolumeSpecName "kube-api-access-x7sdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.430594 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2731a8d0-e56b-4f9a-a63c-60e93134be84-kube-api-access-8rcw8" (OuterVolumeSpecName: "kube-api-access-8rcw8") pod "2731a8d0-e56b-4f9a-a63c-60e93134be84" (UID: "2731a8d0-e56b-4f9a-a63c-60e93134be84"). InnerVolumeSpecName "kube-api-access-8rcw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.523297 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b577ff1-dc09-42d5-95b4-41d690941740-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.523361 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2731a8d0-e56b-4f9a-a63c-60e93134be84-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.523376 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rcw8\" (UniqueName: \"kubernetes.io/projected/2731a8d0-e56b-4f9a-a63c-60e93134be84-kube-api-access-8rcw8\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.523393 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7sdj\" (UniqueName: \"kubernetes.io/projected/1b577ff1-dc09-42d5-95b4-41d690941740-kube-api-access-x7sdj\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.993799 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.993854 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cb75-account-create-update-69wqw" event={"ID":"2731a8d0-e56b-4f9a-a63c-60e93134be84","Type":"ContainerDied","Data":"319f12fd32fb993d04cc14329c897f3f81c79c90dcbb0f04db2e2b661282d883"} Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.993916 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319f12fd32fb993d04cc14329c897f3f81c79c90dcbb0f04db2e2b661282d883" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.995855 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-bdjls" event={"ID":"1b577ff1-dc09-42d5-95b4-41d690941740","Type":"ContainerDied","Data":"c38972930b74f296bd34c17fe1da86395ab253a836f2e5c684264055bd5894a2"} Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.995932 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38972930b74f296bd34c17fe1da86395ab253a836f2e5c684264055bd5894a2" Nov 28 12:18:01 crc kubenswrapper[5030]: I1128 12:18:01.995941 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bdjls" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.201980 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.202727 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.202820 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.204069 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.204167 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" gracePeriod=600 Nov 28 12:18:03 crc kubenswrapper[5030]: E1128 12:18:03.342957 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.708684 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-74wtq"] Nov 28 12:18:03 crc kubenswrapper[5030]: E1128 12:18:03.709038 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b577ff1-dc09-42d5-95b4-41d690941740" containerName="mariadb-database-create" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.709061 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b577ff1-dc09-42d5-95b4-41d690941740" containerName="mariadb-database-create" Nov 28 12:18:03 crc kubenswrapper[5030]: E1128 12:18:03.709083 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2731a8d0-e56b-4f9a-a63c-60e93134be84" containerName="mariadb-account-create-update" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.709092 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2731a8d0-e56b-4f9a-a63c-60e93134be84" containerName="mariadb-account-create-update" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.709284 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b577ff1-dc09-42d5-95b4-41d690941740" containerName="mariadb-database-create" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.709313 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="2731a8d0-e56b-4f9a-a63c-60e93134be84" containerName="mariadb-account-create-update" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.710162 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.715247 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.715377 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-hsvqc" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.722402 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-74wtq"] Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.769708 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-config-data\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.769804 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-db-sync-config-data\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.769848 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8x5p\" (UniqueName: \"kubernetes.io/projected/512c51fc-55b2-4858-9d28-991826eafff1-kube-api-access-v8x5p\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.871761 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-config-data\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.871840 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-db-sync-config-data\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.871872 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8x5p\" (UniqueName: \"kubernetes.io/projected/512c51fc-55b2-4858-9d28-991826eafff1-kube-api-access-v8x5p\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.895684 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-db-sync-config-data\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.895794 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-config-data\") pod \"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:03 crc kubenswrapper[5030]: I1128 12:18:03.897966 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8x5p\" (UniqueName: \"kubernetes.io/projected/512c51fc-55b2-4858-9d28-991826eafff1-kube-api-access-v8x5p\") pod 
\"glance-db-sync-74wtq\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:04 crc kubenswrapper[5030]: I1128 12:18:04.022215 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" exitCode=0 Nov 28 12:18:04 crc kubenswrapper[5030]: I1128 12:18:04.022289 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c"} Nov 28 12:18:04 crc kubenswrapper[5030]: I1128 12:18:04.022355 5030 scope.go:117] "RemoveContainer" containerID="a7058c9055a9b9f831de3e82c6637d0fddb246f761f212b4d9db9f0e85aa948a" Nov 28 12:18:04 crc kubenswrapper[5030]: I1128 12:18:04.023221 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:18:04 crc kubenswrapper[5030]: E1128 12:18:04.023692 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:18:04 crc kubenswrapper[5030]: I1128 12:18:04.032306 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:04 crc kubenswrapper[5030]: I1128 12:18:04.538247 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-74wtq"] Nov 28 12:18:05 crc kubenswrapper[5030]: I1128 12:18:05.034267 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-74wtq" event={"ID":"512c51fc-55b2-4858-9d28-991826eafff1","Type":"ContainerStarted","Data":"54bab5661bdc1a618503347b6253dbb7a7b04681a19132d5eb805988218b7748"} Nov 28 12:18:06 crc kubenswrapper[5030]: I1128 12:18:06.053765 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-74wtq" event={"ID":"512c51fc-55b2-4858-9d28-991826eafff1","Type":"ContainerStarted","Data":"10330f82554a6e0e3a5473a0a2b0a1fbe5ee2c77c286a56650d98ae627554083"} Nov 28 12:18:08 crc kubenswrapper[5030]: I1128 12:18:08.076264 5030 generic.go:334] "Generic (PLEG): container finished" podID="512c51fc-55b2-4858-9d28-991826eafff1" containerID="10330f82554a6e0e3a5473a0a2b0a1fbe5ee2c77c286a56650d98ae627554083" exitCode=0 Nov 28 12:18:08 crc kubenswrapper[5030]: I1128 12:18:08.076322 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-74wtq" event={"ID":"512c51fc-55b2-4858-9d28-991826eafff1","Type":"ContainerDied","Data":"10330f82554a6e0e3a5473a0a2b0a1fbe5ee2c77c286a56650d98ae627554083"} Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.433762 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.494849 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-db-sync-config-data\") pod \"512c51fc-55b2-4858-9d28-991826eafff1\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.494996 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-config-data\") pod \"512c51fc-55b2-4858-9d28-991826eafff1\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.495029 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8x5p\" (UniqueName: \"kubernetes.io/projected/512c51fc-55b2-4858-9d28-991826eafff1-kube-api-access-v8x5p\") pod \"512c51fc-55b2-4858-9d28-991826eafff1\" (UID: \"512c51fc-55b2-4858-9d28-991826eafff1\") " Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.504686 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "512c51fc-55b2-4858-9d28-991826eafff1" (UID: "512c51fc-55b2-4858-9d28-991826eafff1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.504721 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512c51fc-55b2-4858-9d28-991826eafff1-kube-api-access-v8x5p" (OuterVolumeSpecName: "kube-api-access-v8x5p") pod "512c51fc-55b2-4858-9d28-991826eafff1" (UID: "512c51fc-55b2-4858-9d28-991826eafff1"). 
InnerVolumeSpecName "kube-api-access-v8x5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.539059 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-config-data" (OuterVolumeSpecName: "config-data") pod "512c51fc-55b2-4858-9d28-991826eafff1" (UID: "512c51fc-55b2-4858-9d28-991826eafff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.596594 5030 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.596630 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512c51fc-55b2-4858-9d28-991826eafff1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:09 crc kubenswrapper[5030]: I1128 12:18:09.596643 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8x5p\" (UniqueName: \"kubernetes.io/projected/512c51fc-55b2-4858-9d28-991826eafff1-kube-api-access-v8x5p\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:10 crc kubenswrapper[5030]: I1128 12:18:10.097250 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-74wtq" event={"ID":"512c51fc-55b2-4858-9d28-991826eafff1","Type":"ContainerDied","Data":"54bab5661bdc1a618503347b6253dbb7a7b04681a19132d5eb805988218b7748"} Nov 28 12:18:10 crc kubenswrapper[5030]: I1128 12:18:10.097592 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54bab5661bdc1a618503347b6253dbb7a7b04681a19132d5eb805988218b7748" Nov 28 12:18:10 crc kubenswrapper[5030]: I1128 12:18:10.097452 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-74wtq" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.524074 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:18:11 crc kubenswrapper[5030]: E1128 12:18:11.524380 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512c51fc-55b2-4858-9d28-991826eafff1" containerName="glance-db-sync" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.524394 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="512c51fc-55b2-4858-9d28-991826eafff1" containerName="glance-db-sync" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.524543 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="512c51fc-55b2-4858-9d28-991826eafff1" containerName="glance-db-sync" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.525327 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.529115 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.531868 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-hsvqc" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.531898 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.547528 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628095 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628168 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628234 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-sys\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628269 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-dev\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628299 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628328 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628361 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hks8g\" (UniqueName: \"kubernetes.io/projected/a4c4907d-c06e-490b-a02d-49dfc45e62b0-kube-api-access-hks8g\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628394 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-logs\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628437 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-config-data\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628570 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628734 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-run\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628809 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.628863 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.629052 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-scripts\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.730659 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-scripts\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.730764 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.730825 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.730883 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-sys\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.730923 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-dev\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.730947 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731034 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.730965 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731107 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-sys\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731065 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-dev\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731189 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731313 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hks8g\" (UniqueName: \"kubernetes.io/projected/a4c4907d-c06e-490b-a02d-49dfc45e62b0-kube-api-access-hks8g\") pod 
\"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731397 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-logs\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731445 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-config-data\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731551 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731596 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731651 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-run\") pod \"glance-default-external-api-1\" 
(UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731709 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731770 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.732082 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.732248 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-run\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.743937 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-logs\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " 
pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.744068 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.744301 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.745175 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-scripts\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.731543 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.750047 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-config-data\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 
12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.772627 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hks8g\" (UniqueName: \"kubernetes.io/projected/a4c4907d-c06e-490b-a02d-49dfc45e62b0-kube-api-access-hks8g\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.801380 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.807695 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.810960 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.815519 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.836452 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-1\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.843065 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938654 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938752 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938778 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938804 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmmmz\" (UniqueName: \"kubernetes.io/projected/c83086a2-0b01-46d3-9eca-a78e189901e4-kube-api-access-qmmmz\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938828 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938875 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938902 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938929 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-run\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938962 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.938995 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.939018 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.939042 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.939071 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-dev\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:11 crc kubenswrapper[5030]: I1128 12:18:11.939109 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-sys\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.040988 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-sys\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041091 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041144 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041169 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041193 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041188 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-sys\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041216 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmmmz\" (UniqueName: \"kubernetes.io/projected/c83086a2-0b01-46d3-9eca-a78e189901e4-kube-api-access-qmmmz\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041338 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041369 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041400 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-run\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041462 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041532 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041552 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041569 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041610 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-dev\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.041736 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-dev\") pod 
\"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.042334 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.042384 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.042442 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-run\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.042479 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.042564 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.042698 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.042858 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.044634 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.044676 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.048021 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.048806 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.059344 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmmmz\" (UniqueName: \"kubernetes.io/projected/c83086a2-0b01-46d3-9eca-a78e189901e4-kube-api-access-qmmmz\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.072118 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.074380 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.113921 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:18:12 crc kubenswrapper[5030]: W1128 12:18:12.119322 5030 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4c4907d_c06e_490b_a02d_49dfc45e62b0.slice/crio-72dc4b19ffcefe007eda49308e5e11385769d839657582c40c19563f4cfebc9c WatchSource:0}: Error finding container 72dc4b19ffcefe007eda49308e5e11385769d839657582c40c19563f4cfebc9c: Status 404 returned error can't find the container with id 72dc4b19ffcefe007eda49308e5e11385769d839657582c40c19563f4cfebc9c Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.198436 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.200278 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.203108 5030 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.205324 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.219508 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.227095 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.228897 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.240988 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.347860 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.347911 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.347955 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348090 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348160 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348192 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-scripts\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348218 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-logs\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348239 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-scripts\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348258 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 
12:18:12.348298 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348422 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348488 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tk5\" (UniqueName: \"kubernetes.io/projected/168401c9-db52-49d2-8cf7-988b60e50065-kube-api-access-x9tk5\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348527 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-run\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.348574 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 
crc kubenswrapper[5030]: I1128 12:18:12.348603 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr22j\" (UniqueName: \"kubernetes.io/projected/b5aef521-fefa-4878-a4b1-3524e7a9b262-kube-api-access-hr22j\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349111 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349168 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349201 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-sys\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349240 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349273 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-dev\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349293 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-config-data\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349311 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-config-data\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349335 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-dev\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349358 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-run\") pod \"glance-default-internal-api-1\" (UID: 
\"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349382 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349402 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-sys\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349508 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-logs\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.349583 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451122 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-httpd-run\") pod 
\"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451537 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-sys\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451563 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451585 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-dev\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451611 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-config-data\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451628 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-config-data\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451643 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-dev\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451661 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-run\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452558 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452597 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-sys\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452634 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-logs\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: 
I1128 12:18:12.452670 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452698 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452722 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452761 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452827 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451755 5030 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451772 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-sys\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451799 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-dev\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451822 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-dev\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.451840 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-run\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.452128 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453286 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453311 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-sys\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453530 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-logs\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453553 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453578 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-nvme\") pod 
\"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453611 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453773 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453845 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453881 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-scripts\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453902 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-logs\") pod 
\"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453929 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-scripts\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453954 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.453996 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454032 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454062 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9tk5\" (UniqueName: \"kubernetes.io/projected/168401c9-db52-49d2-8cf7-988b60e50065-kube-api-access-x9tk5\") pod \"glance-default-internal-api-0\" (UID: 
\"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454095 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-run\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454114 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454142 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr22j\" (UniqueName: \"kubernetes.io/projected/b5aef521-fefa-4878-a4b1-3524e7a9b262-kube-api-access-hr22j\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454205 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454352 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454455 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454521 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454647 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454655 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-logs\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454690 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc 
kubenswrapper[5030]: I1128 12:18:12.454728 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-run\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454761 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.454853 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.459102 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-config-data\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.460837 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-config-data\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 
12:18:12.463158 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-scripts\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.463759 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-scripts\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.476893 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr22j\" (UniqueName: \"kubernetes.io/projected/b5aef521-fefa-4878-a4b1-3524e7a9b262-kube-api-access-hr22j\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.478235 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9tk5\" (UniqueName: \"kubernetes.io/projected/168401c9-db52-49d2-8cf7-988b60e50065-kube-api-access-x9tk5\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.484657 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.484756 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.497514 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: W1128 12:18:12.498760 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc83086a2_0b01_46d3_9eca_a78e189901e4.slice/crio-6eefbe405e49e67734343d853c3e1304f0afcd9b1244db9e5849369d4e28b7ae WatchSource:0}: Error finding container 6eefbe405e49e67734343d853c3e1304f0afcd9b1244db9e5849369d4e28b7ae: Status 404 returned error can't find the container with id 6eefbe405e49e67734343d853c3e1304f0afcd9b1244db9e5849369d4e28b7ae Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.500554 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.528835 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.558561 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.810229 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:12 crc kubenswrapper[5030]: I1128 12:18:12.832349 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.036640 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:13 crc kubenswrapper[5030]: W1128 12:18:13.048057 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5aef521_fefa_4878_a4b1_3524e7a9b262.slice/crio-910a76486ccf2ad040c1bae73a7bf4c7e1e52c73804f344d3dee41c30948163b WatchSource:0}: Error finding container 910a76486ccf2ad040c1bae73a7bf4c7e1e52c73804f344d3dee41c30948163b: Status 404 returned error can't find the container with id 910a76486ccf2ad040c1bae73a7bf4c7e1e52c73804f344d3dee41c30948163b Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.132443 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b5aef521-fefa-4878-a4b1-3524e7a9b262","Type":"ContainerStarted","Data":"910a76486ccf2ad040c1bae73a7bf4c7e1e52c73804f344d3dee41c30948163b"} Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.134009 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"c83086a2-0b01-46d3-9eca-a78e189901e4","Type":"ContainerStarted","Data":"7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332"} Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.134035 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"c83086a2-0b01-46d3-9eca-a78e189901e4","Type":"ContainerStarted","Data":"af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce"} Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.134045 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"c83086a2-0b01-46d3-9eca-a78e189901e4","Type":"ContainerStarted","Data":"6eefbe405e49e67734343d853c3e1304f0afcd9b1244db9e5849369d4e28b7ae"} Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.142734 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"a4c4907d-c06e-490b-a02d-49dfc45e62b0","Type":"ContainerStarted","Data":"0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982"} Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.142777 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"a4c4907d-c06e-490b-a02d-49dfc45e62b0","Type":"ContainerStarted","Data":"c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345"} Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.142792 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"a4c4907d-c06e-490b-a02d-49dfc45e62b0","Type":"ContainerStarted","Data":"72dc4b19ffcefe007eda49308e5e11385769d839657582c40c19563f4cfebc9c"} Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.159917 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=3.159894978 podStartE2EDuration="3.159894978s" podCreationTimestamp="2025-11-28 12:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:18:13.158136231 +0000 UTC m=+1511.099878914" watchObservedRunningTime="2025-11-28 
12:18:13.159894978 +0000 UTC m=+1511.101637661" Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.190394 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=2.190369954 podStartE2EDuration="2.190369954s" podCreationTimestamp="2025-11-28 12:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:18:13.179569261 +0000 UTC m=+1511.121311954" watchObservedRunningTime="2025-11-28 12:18:13.190369954 +0000 UTC m=+1511.132112637" Nov 28 12:18:13 crc kubenswrapper[5030]: I1128 12:18:13.282503 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:13 crc kubenswrapper[5030]: W1128 12:18:13.286981 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod168401c9_db52_49d2_8cf7_988b60e50065.slice/crio-86f9e2adaf8c302f791065aa8563163667787627339cea87f7335adf6eb84d3c WatchSource:0}: Error finding container 86f9e2adaf8c302f791065aa8563163667787627339cea87f7335adf6eb84d3c: Status 404 returned error can't find the container with id 86f9e2adaf8c302f791065aa8563163667787627339cea87f7335adf6eb84d3c Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.160889 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"168401c9-db52-49d2-8cf7-988b60e50065","Type":"ContainerStarted","Data":"d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537"} Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.161533 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"168401c9-db52-49d2-8cf7-988b60e50065","Type":"ContainerStarted","Data":"6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac"} Nov 28 
12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.161553 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"168401c9-db52-49d2-8cf7-988b60e50065","Type":"ContainerStarted","Data":"86f9e2adaf8c302f791065aa8563163667787627339cea87f7335adf6eb84d3c"} Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.167533 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b5aef521-fefa-4878-a4b1-3524e7a9b262","Type":"ContainerStarted","Data":"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad"} Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.167766 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b5aef521-fefa-4878-a4b1-3524e7a9b262","Type":"ContainerStarted","Data":"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f"} Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.168019 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-log" containerID="cri-o://2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f" gracePeriod=30 Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.168165 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-httpd" containerID="cri-o://e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad" gracePeriod=30 Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.182764 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=3.182739347 podStartE2EDuration="3.182739347s" podCreationTimestamp="2025-11-28 
12:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:18:14.180789484 +0000 UTC m=+1512.122532167" watchObservedRunningTime="2025-11-28 12:18:14.182739347 +0000 UTC m=+1512.124482030" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.211112 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=3.211090505 podStartE2EDuration="3.211090505s" podCreationTimestamp="2025-11-28 12:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:18:14.209863882 +0000 UTC m=+1512.151606565" watchObservedRunningTime="2025-11-28 12:18:14.211090505 +0000 UTC m=+1512.152833188" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.874590 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.903119 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-sys\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.903357 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.903508 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-iscsi\") pod 
\"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.903714 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-httpd-run\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.903826 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-var-locks-brick\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.903308 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-sys" (OuterVolumeSpecName: "sys") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.904005 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.904156 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). 
InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.903942 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-config-data\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.904296 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.904599 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-nvme\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.904791 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-run\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.904728 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.904916 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-run" (OuterVolumeSpecName: "run") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.905189 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-logs\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.905343 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.905776 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-lib-modules\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.905916 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-scripts\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.906029 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-dev\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.906157 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr22j\" (UniqueName: \"kubernetes.io/projected/b5aef521-fefa-4878-a4b1-3524e7a9b262-kube-api-access-hr22j\") pod \"b5aef521-fefa-4878-a4b1-3524e7a9b262\" (UID: \"b5aef521-fefa-4878-a4b1-3524e7a9b262\") " Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.905555 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-logs" (OuterVolumeSpecName: "logs") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.906641 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-dev" (OuterVolumeSpecName: "dev") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.907148 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.910544 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.910741 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.910840 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.911216 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.911287 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.911366 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.911443 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.911542 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b5aef521-fefa-4878-a4b1-3524e7a9b262-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.911619 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b5aef521-fefa-4878-a4b1-3524e7a9b262-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.911531 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5aef521-fefa-4878-a4b1-3524e7a9b262-kube-api-access-hr22j" (OuterVolumeSpecName: "kube-api-access-hr22j") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "kube-api-access-hr22j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.912888 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.919687 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.928177 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-scripts" (OuterVolumeSpecName: "scripts") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:14 crc kubenswrapper[5030]: I1128 12:18:14.971437 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-config-data" (OuterVolumeSpecName: "config-data") pod "b5aef521-fefa-4878-a4b1-3524e7a9b262" (UID: "b5aef521-fefa-4878-a4b1-3524e7a9b262"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.013999 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.014031 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.014041 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr22j\" (UniqueName: \"kubernetes.io/projected/b5aef521-fefa-4878-a4b1-3524e7a9b262-kube-api-access-hr22j\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.014060 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 
12:18:15.014073 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5aef521-fefa-4878-a4b1-3524e7a9b262-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.029382 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.033838 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.115157 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.115206 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.185158 5030 generic.go:334] "Generic (PLEG): container finished" podID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerID="e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad" exitCode=0 Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.185199 5030 generic.go:334] "Generic (PLEG): container finished" podID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerID="2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f" exitCode=143 Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.186231 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.188687 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b5aef521-fefa-4878-a4b1-3524e7a9b262","Type":"ContainerDied","Data":"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad"} Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.188900 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b5aef521-fefa-4878-a4b1-3524e7a9b262","Type":"ContainerDied","Data":"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f"} Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.189091 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b5aef521-fefa-4878-a4b1-3524e7a9b262","Type":"ContainerDied","Data":"910a76486ccf2ad040c1bae73a7bf4c7e1e52c73804f344d3dee41c30948163b"} Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.188994 5030 scope.go:117] "RemoveContainer" containerID="e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.232665 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.232912 5030 scope.go:117] "RemoveContainer" containerID="2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.238482 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.264207 5030 scope.go:117] "RemoveContainer" containerID="e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad" Nov 28 12:18:15 crc kubenswrapper[5030]: E1128 12:18:15.265749 5030 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad\": container with ID starting with e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad not found: ID does not exist" containerID="e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.265835 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad"} err="failed to get container status \"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad\": rpc error: code = NotFound desc = could not find container \"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad\": container with ID starting with e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad not found: ID does not exist" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.265888 5030 scope.go:117] "RemoveContainer" containerID="2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f" Nov 28 12:18:15 crc kubenswrapper[5030]: E1128 12:18:15.267632 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f\": container with ID starting with 2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f not found: ID does not exist" containerID="2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.267678 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f"} err="failed to get container status \"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f\": rpc error: code = NotFound 
desc = could not find container \"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f\": container with ID starting with 2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f not found: ID does not exist" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.267715 5030 scope.go:117] "RemoveContainer" containerID="e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.268283 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad"} err="failed to get container status \"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad\": rpc error: code = NotFound desc = could not find container \"e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad\": container with ID starting with e4c7c7ed7ccd91d2662e16103c075a8242bdb82cf7bc5d42b5d62d10756d88ad not found: ID does not exist" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.268308 5030 scope.go:117] "RemoveContainer" containerID="2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.268677 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f"} err="failed to get container status \"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f\": rpc error: code = NotFound desc = could not find container \"2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f\": container with ID starting with 2b6a421886320a7ac4c7210b858ee2817c26dd7f3e2979a1da45fa6355875b9f not found: ID does not exist" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.275512 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:15 crc kubenswrapper[5030]: E1128 
12:18:15.277002 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-log" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.277061 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-log" Nov 28 12:18:15 crc kubenswrapper[5030]: E1128 12:18:15.277103 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-httpd" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.277118 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-httpd" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.277526 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-httpd" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.277572 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" containerName="glance-log" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.279155 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.295213 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.429592 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-run\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.429998 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430031 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-scripts\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430072 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430101 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430181 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430239 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hntl8\" (UniqueName: \"kubernetes.io/projected/8b290bf2-1b37-4532-8709-b7d38bdce138-kube-api-access-hntl8\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430272 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430302 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 
12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430357 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-sys\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430388 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430452 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-config-data\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430497 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-dev\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.430518 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-logs\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 
12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532380 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-logs\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532437 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-run\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532453 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532501 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-scripts\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532550 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532584 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532605 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532631 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hntl8\" (UniqueName: \"kubernetes.io/projected/8b290bf2-1b37-4532-8709-b7d38bdce138-kube-api-access-hntl8\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532671 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532710 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532743 5030 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-sys\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532771 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532818 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-config-data\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532847 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-dev\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.532937 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-dev\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.533588 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-logs\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.534063 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-run\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.534639 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.534722 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.534777 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-sys\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.534844 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-var-locks-brick\") pod 
\"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.534868 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.535107 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.535125 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.534849 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.542600 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-config-data\") pod 
\"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.544041 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-scripts\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.560976 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.564239 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hntl8\" (UniqueName: \"kubernetes.io/projected/8b290bf2-1b37-4532-8709-b7d38bdce138-kube-api-access-hntl8\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.578070 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-1\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:15 crc kubenswrapper[5030]: I1128 12:18:15.614571 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:16 crc kubenswrapper[5030]: I1128 12:18:16.095916 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:18:16 crc kubenswrapper[5030]: I1128 12:18:16.196261 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"8b290bf2-1b37-4532-8709-b7d38bdce138","Type":"ContainerStarted","Data":"22c249f741f3a5e03ae327887f157211862bdb0ae08cbba251f968c5476a9beb"} Nov 28 12:18:16 crc kubenswrapper[5030]: I1128 12:18:16.393235 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:18:16 crc kubenswrapper[5030]: E1128 12:18:16.394111 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:18:16 crc kubenswrapper[5030]: I1128 12:18:16.406705 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5aef521-fefa-4878-a4b1-3524e7a9b262" path="/var/lib/kubelet/pods/b5aef521-fefa-4878-a4b1-3524e7a9b262/volumes" Nov 28 12:18:17 crc kubenswrapper[5030]: I1128 12:18:17.206797 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"8b290bf2-1b37-4532-8709-b7d38bdce138","Type":"ContainerStarted","Data":"896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03"} Nov 28 12:18:17 crc kubenswrapper[5030]: I1128 12:18:17.207241 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"8b290bf2-1b37-4532-8709-b7d38bdce138","Type":"ContainerStarted","Data":"d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0"} Nov 28 12:18:17 crc kubenswrapper[5030]: I1128 12:18:17.254695 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=2.254663145 podStartE2EDuration="2.254663145s" podCreationTimestamp="2025-11-28 12:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:18:17.242222328 +0000 UTC m=+1515.183965051" watchObservedRunningTime="2025-11-28 12:18:17.254663145 +0000 UTC m=+1515.196405858" Nov 28 12:18:21 crc kubenswrapper[5030]: I1128 12:18:21.844451 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:21 crc kubenswrapper[5030]: I1128 12:18:21.845119 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:21 crc kubenswrapper[5030]: I1128 12:18:21.894981 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:21 crc kubenswrapper[5030]: I1128 12:18:21.927941 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.207007 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.207103 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.239365 5030 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.266363 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.268816 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.268877 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.268896 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.268914 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.834881 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.834953 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.902119 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:22 crc kubenswrapper[5030]: I1128 12:18:22.907101 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:23 crc kubenswrapper[5030]: I1128 12:18:23.276369 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 
12:18:23 crc kubenswrapper[5030]: I1128 12:18:23.276445 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:24 crc kubenswrapper[5030]: I1128 12:18:24.149342 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:24 crc kubenswrapper[5030]: I1128 12:18:24.282805 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:18:24 crc kubenswrapper[5030]: I1128 12:18:24.302507 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:24 crc kubenswrapper[5030]: I1128 12:18:24.302653 5030 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 12:18:24 crc kubenswrapper[5030]: I1128 12:18:24.310230 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:18:24 crc kubenswrapper[5030]: I1128 12:18:24.322409 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:24 crc kubenswrapper[5030]: I1128 12:18:24.417133 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:25 crc kubenswrapper[5030]: I1128 12:18:25.243823 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:25 crc kubenswrapper[5030]: I1128 12:18:25.246608 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:25 crc kubenswrapper[5030]: I1128 12:18:25.615998 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:25 
crc kubenswrapper[5030]: I1128 12:18:25.616048 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:25 crc kubenswrapper[5030]: I1128 12:18:25.644357 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:25 crc kubenswrapper[5030]: I1128 12:18:25.656711 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:26 crc kubenswrapper[5030]: I1128 12:18:26.300071 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-log" containerID="cri-o://af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce" gracePeriod=30 Nov 28 12:18:26 crc kubenswrapper[5030]: I1128 12:18:26.300213 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-httpd" containerID="cri-o://7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332" gracePeriod=30 Nov 28 12:18:26 crc kubenswrapper[5030]: I1128 12:18:26.300613 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:26 crc kubenswrapper[5030]: I1128 12:18:26.300673 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:27 crc kubenswrapper[5030]: I1128 12:18:27.315496 5030 generic.go:334] "Generic (PLEG): container finished" podID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerID="af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce" exitCode=143 Nov 28 12:18:27 crc kubenswrapper[5030]: I1128 12:18:27.315987 5030 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"c83086a2-0b01-46d3-9eca-a78e189901e4","Type":"ContainerDied","Data":"af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce"} Nov 28 12:18:28 crc kubenswrapper[5030]: I1128 12:18:28.141161 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:28 crc kubenswrapper[5030]: I1128 12:18:28.273188 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:18:28 crc kubenswrapper[5030]: I1128 12:18:28.341965 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:28 crc kubenswrapper[5030]: I1128 12:18:28.342273 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-log" containerID="cri-o://6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac" gracePeriod=30 Nov 28 12:18:28 crc kubenswrapper[5030]: I1128 12:18:28.342385 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-httpd" containerID="cri-o://d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537" gracePeriod=30 Nov 28 12:18:29 crc kubenswrapper[5030]: I1128 12:18:29.359300 5030 generic.go:334] "Generic (PLEG): container finished" podID="168401c9-db52-49d2-8cf7-988b60e50065" containerID="6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac" exitCode=143 Nov 28 12:18:29 crc kubenswrapper[5030]: I1128 12:18:29.359394 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"168401c9-db52-49d2-8cf7-988b60e50065","Type":"ContainerDied","Data":"6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac"} Nov 28 12:18:29 crc kubenswrapper[5030]: I1128 12:18:29.393222 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:18:29 crc kubenswrapper[5030]: E1128 12:18:29.393522 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.017629 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.056065 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-logs\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.056616 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-nvme\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.056733 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-lib-modules\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" 
(UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.056865 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-config-data\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057030 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmmmz\" (UniqueName: \"kubernetes.io/projected/c83086a2-0b01-46d3-9eca-a78e189901e4-kube-api-access-qmmmz\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057127 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-scripts\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057288 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-dev\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057391 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-sys\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057591 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-run\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057694 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057796 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-httpd-run\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.057945 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.058054 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-iscsi\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.059705 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-var-locks-brick\") pod \"c83086a2-0b01-46d3-9eca-a78e189901e4\" (UID: \"c83086a2-0b01-46d3-9eca-a78e189901e4\") " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.060246 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-dev" (OuterVolumeSpecName: "dev") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.060628 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.061274 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-sys" (OuterVolumeSpecName: "sys") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.061779 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-run" (OuterVolumeSpecName: "run") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.061815 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.061905 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.061994 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.062112 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.062229 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.065063 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-logs" (OuterVolumeSpecName: "logs") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.068915 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-scripts" (OuterVolumeSpecName: "scripts") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.069101 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.070345 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83086a2-0b01-46d3-9eca-a78e189901e4-kube-api-access-qmmmz" (OuterVolumeSpecName: "kube-api-access-qmmmz") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "kube-api-access-qmmmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.086720 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance-cache") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.114676 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-config-data" (OuterVolumeSpecName: "config-data") pod "c83086a2-0b01-46d3-9eca-a78e189901e4" (UID: "c83086a2-0b01-46d3-9eca-a78e189901e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162367 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162408 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162421 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162435 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmmmz\" (UniqueName: \"kubernetes.io/projected/c83086a2-0b01-46d3-9eca-a78e189901e4-kube-api-access-qmmmz\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc 
kubenswrapper[5030]: I1128 12:18:30.162449 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c83086a2-0b01-46d3-9eca-a78e189901e4-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162483 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162495 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162535 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162548 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162567 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162578 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162589 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c83086a2-0b01-46d3-9eca-a78e189901e4-var-locks-brick\") on node \"crc\" 
DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.162600 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83086a2-0b01-46d3-9eca-a78e189901e4-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.178291 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.189627 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.264256 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.264312 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.370540 5030 generic.go:334] "Generic (PLEG): container finished" podID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerID="7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332" exitCode=0 Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.370601 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"c83086a2-0b01-46d3-9eca-a78e189901e4","Type":"ContainerDied","Data":"7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332"} Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.370622 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.370646 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"c83086a2-0b01-46d3-9eca-a78e189901e4","Type":"ContainerDied","Data":"6eefbe405e49e67734343d853c3e1304f0afcd9b1244db9e5849369d4e28b7ae"} Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.370673 5030 scope.go:117] "RemoveContainer" containerID="7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.401895 5030 scope.go:117] "RemoveContainer" containerID="af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.409376 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.415991 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.423710 5030 scope.go:117] "RemoveContainer" containerID="7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332" Nov 28 12:18:30 crc kubenswrapper[5030]: E1128 12:18:30.425788 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332\": container with ID starting with 7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332 not found: ID does not exist" containerID="7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.425847 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332"} err="failed to 
get container status \"7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332\": rpc error: code = NotFound desc = could not find container \"7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332\": container with ID starting with 7171f2008f9fbb42654305c05c9baebc9ad3bd551062f97caf46fe9c74535332 not found: ID does not exist" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.425887 5030 scope.go:117] "RemoveContainer" containerID="af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce" Nov 28 12:18:30 crc kubenswrapper[5030]: E1128 12:18:30.426419 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce\": container with ID starting with af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce not found: ID does not exist" containerID="af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.426588 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce"} err="failed to get container status \"af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce\": rpc error: code = NotFound desc = could not find container \"af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce\": container with ID starting with af8dfc7abded04853a71ef168a756891b165c11f02057fce8da5f0823dca98ce not found: ID does not exist" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.439325 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:30 crc kubenswrapper[5030]: E1128 12:18:30.439727 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-log" Nov 28 12:18:30 crc kubenswrapper[5030]: 
I1128 12:18:30.439753 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-log" Nov 28 12:18:30 crc kubenswrapper[5030]: E1128 12:18:30.439767 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-httpd" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.439776 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-httpd" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.439969 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-log" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.439988 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" containerName="glance-httpd" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.441006 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.459523 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.569796 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-logs\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.569892 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-dev\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.569927 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-config-data\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570016 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-scripts\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570054 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570081 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-run\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570211 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570264 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570287 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c55f\" (UniqueName: \"kubernetes.io/projected/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-kube-api-access-7c55f\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc 
kubenswrapper[5030]: I1128 12:18:30.570310 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570362 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570400 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570433 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.570455 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-sys\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.671981 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.672807 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.672877 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.672698 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673006 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673075 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-sys\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673155 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-logs\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673249 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-dev\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673324 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-config-data\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673405 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-scripts\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc 
kubenswrapper[5030]: I1128 12:18:30.673521 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673626 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-run\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673709 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673771 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-run\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673251 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-sys\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673792 5030 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673911 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673944 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c55f\" (UniqueName: \"kubernetes.io/projected/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-kube-api-access-7c55f\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.674005 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673255 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673802 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-logs\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.673346 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-dev\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.674141 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.674262 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.674272 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.679402 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-scripts\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.682450 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-config-data\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.698055 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.704129 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c55f\" (UniqueName: \"kubernetes.io/projected/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-kube-api-access-7c55f\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.709108 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-0\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") " pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:30 crc kubenswrapper[5030]: I1128 12:18:30.763579 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.241221 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:18:31 crc kubenswrapper[5030]: W1128 12:18:31.250031 5030 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79ae9917_8b41_4fd9_a1bc_1bf8b0467da5.slice/crio-6e99844c9d79b3632a29d577ec05fc9ad9e58424e834bff04ee70172afe881da WatchSource:0}: Error finding container 6e99844c9d79b3632a29d577ec05fc9ad9e58424e834bff04ee70172afe881da: Status 404 returned error can't find the container with id 6e99844c9d79b3632a29d577ec05fc9ad9e58424e834bff04ee70172afe881da Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.381745 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5","Type":"ContainerStarted","Data":"6e99844c9d79b3632a29d577ec05fc9ad9e58424e834bff04ee70172afe881da"} Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.884666 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997139 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-iscsi\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997207 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997238 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-run\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997267 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-var-locks-brick\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997317 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-scripts\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997376 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-nvme\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997404 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9tk5\" (UniqueName: \"kubernetes.io/projected/168401c9-db52-49d2-8cf7-988b60e50065-kube-api-access-x9tk5\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997443 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-config-data\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997492 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-lib-modules\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997540 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997568 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-httpd-run\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997634 5030 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-sys\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997654 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-dev\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997689 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-logs\") pod \"168401c9-db52-49d2-8cf7-988b60e50065\" (UID: \"168401c9-db52-49d2-8cf7-988b60e50065\") " Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997798 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-sys" (OuterVolumeSpecName: "sys") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997795 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.997887 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998115 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998131 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998143 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998183 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998496 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998540 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-dev" (OuterVolumeSpecName: "dev") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998576 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-run" (OuterVolumeSpecName: "run") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998623 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:18:31 crc kubenswrapper[5030]: I1128 12:18:31.998840 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-logs" (OuterVolumeSpecName: "logs") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.004496 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-scripts" (OuterVolumeSpecName: "scripts") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.004591 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance-cache") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.005177 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.005233 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168401c9-db52-49d2-8cf7-988b60e50065-kube-api-access-x9tk5" (OuterVolumeSpecName: "kube-api-access-x9tk5") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "kube-api-access-x9tk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.063810 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-config-data" (OuterVolumeSpecName: "config-data") pod "168401c9-db52-49d2-8cf7-988b60e50065" (UID: "168401c9-db52-49d2-8cf7-988b60e50065"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100277 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100311 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100322 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100332 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9tk5\" (UniqueName: \"kubernetes.io/projected/168401c9-db52-49d2-8cf7-988b60e50065-kube-api-access-x9tk5\") on node \"crc\" DevicePath \"\"" 
Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100341 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168401c9-db52-49d2-8cf7-988b60e50065-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100351 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100387 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100396 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100404 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/168401c9-db52-49d2-8cf7-988b60e50065-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100412 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/168401c9-db52-49d2-8cf7-988b60e50065-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.100426 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.113497 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on 
node "crc" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.119739 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.204590 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.204643 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.399085 5030 generic.go:334] "Generic (PLEG): container finished" podID="168401c9-db52-49d2-8cf7-988b60e50065" containerID="d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537" exitCode=0 Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.407770 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.411359 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83086a2-0b01-46d3-9eca-a78e189901e4" path="/var/lib/kubelet/pods/c83086a2-0b01-46d3-9eca-a78e189901e4/volumes" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.413025 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5","Type":"ContainerStarted","Data":"013354d21a664bc410882d9cb5ee4b82c3db157cf02ed14d44093697fc3be7fc"} Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.413074 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5","Type":"ContainerStarted","Data":"3619beffb11c6788fc77b0c6c3dbcb59c0da43d4311364acc06abd8289789d51"} Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.413096 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"168401c9-db52-49d2-8cf7-988b60e50065","Type":"ContainerDied","Data":"d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537"} Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.413128 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"168401c9-db52-49d2-8cf7-988b60e50065","Type":"ContainerDied","Data":"86f9e2adaf8c302f791065aa8563163667787627339cea87f7335adf6eb84d3c"} Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.413168 5030 scope.go:117] "RemoveContainer" containerID="d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.469277 5030 scope.go:117] "RemoveContainer" containerID="6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac" Nov 28 12:18:32 
crc kubenswrapper[5030]: I1128 12:18:32.503794 5030 scope.go:117] "RemoveContainer" containerID="d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.504204 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.50414817 podStartE2EDuration="2.50414817s" podCreationTimestamp="2025-11-28 12:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:18:32.488808884 +0000 UTC m=+1530.430551587" watchObservedRunningTime="2025-11-28 12:18:32.50414817 +0000 UTC m=+1530.445890873" Nov 28 12:18:32 crc kubenswrapper[5030]: E1128 12:18:32.504943 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537\": container with ID starting with d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537 not found: ID does not exist" containerID="d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.504979 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537"} err="failed to get container status \"d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537\": rpc error: code = NotFound desc = could not find container \"d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537\": container with ID starting with d16766403a8893ecb48e0aecd7210763aebb949a4fe39115deb346a4c0595537 not found: ID does not exist" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.505000 5030 scope.go:117] "RemoveContainer" containerID="6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac" Nov 28 12:18:32 crc 
kubenswrapper[5030]: E1128 12:18:32.505751 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac\": container with ID starting with 6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac not found: ID does not exist" containerID="6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.505776 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac"} err="failed to get container status \"6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac\": rpc error: code = NotFound desc = could not find container \"6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac\": container with ID starting with 6c6fefa2bc9dfa798ad84e135229af8f9305e1905115acdffc35215bd80b1eac not found: ID does not exist" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.530345 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.538854 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.557585 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:32 crc kubenswrapper[5030]: E1128 12:18:32.557910 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-log" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.557927 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-log" Nov 28 12:18:32 crc kubenswrapper[5030]: E1128 
12:18:32.557960 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-httpd" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.557968 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-httpd" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.558101 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-log" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.558114 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="168401c9-db52-49d2-8cf7-988b60e50065" containerName="glance-httpd" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.558907 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.577222 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714236 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714327 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-logs\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714351 5030 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714373 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-sys\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714402 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nch6r\" (UniqueName: \"kubernetes.io/projected/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-kube-api-access-nch6r\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714430 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714459 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 
12:18:32.714525 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714553 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714589 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714611 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714639 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714677 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-run\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.714708 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-dev\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816606 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816662 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816693 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816712 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816736 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816772 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-run\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816797 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-dev\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816821 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc 
kubenswrapper[5030]: I1128 12:18:32.816852 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-logs\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816870 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-sys\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816884 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816906 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nch6r\" (UniqueName: \"kubernetes.io/projected/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-kube-api-access-nch6r\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816923 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.816941 5030 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.817216 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.818011 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-dev\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.818229 5030 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819559 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819636 5030 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819669 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819751 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-sys\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819785 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819807 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-run\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819786 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.819984 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-logs\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.825253 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.826553 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.839452 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nch6r\" (UniqueName: \"kubernetes.io/projected/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-kube-api-access-nch6r\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.860212 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.864381 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:32 crc kubenswrapper[5030]: I1128 12:18:32.941285 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:33 crc kubenswrapper[5030]: I1128 12:18:33.423301 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:18:34 crc kubenswrapper[5030]: I1128 12:18:34.406492 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168401c9-db52-49d2-8cf7-988b60e50065" path="/var/lib/kubelet/pods/168401c9-db52-49d2-8cf7-988b60e50065/volumes" Nov 28 12:18:34 crc kubenswrapper[5030]: I1128 12:18:34.421828 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb","Type":"ContainerStarted","Data":"8f229126b32f77e8c9728409abb3ef64ed2eb7762e3dde1bbd2d43a3650ce27f"} Nov 28 12:18:34 crc kubenswrapper[5030]: I1128 12:18:34.421912 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb","Type":"ContainerStarted","Data":"4a034250df6d23ccc2ceb665e79f8c99eb3cc19b7fd1a31056c597251c6067e6"} Nov 28 12:18:34 crc kubenswrapper[5030]: I1128 12:18:34.421935 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb","Type":"ContainerStarted","Data":"c7d982a7bd7d67d7afb10291efec11b7cecf7407f2b6c56abd64b4caa9277196"} Nov 28 12:18:34 crc kubenswrapper[5030]: I1128 12:18:34.458515 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.458460352 podStartE2EDuration="2.458460352s" podCreationTimestamp="2025-11-28 12:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 12:18:34.452434499 +0000 UTC m=+1532.394177262" watchObservedRunningTime="2025-11-28 12:18:34.458460352 +0000 UTC m=+1532.400203075" Nov 28 12:18:40 crc kubenswrapper[5030]: I1128 12:18:40.764780 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:40 crc kubenswrapper[5030]: I1128 12:18:40.765918 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:40 crc kubenswrapper[5030]: I1128 12:18:40.829099 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:40 crc kubenswrapper[5030]: I1128 12:18:40.844964 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:41 crc kubenswrapper[5030]: I1128 12:18:41.496493 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:41 crc kubenswrapper[5030]: I1128 12:18:41.496996 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:42 crc kubenswrapper[5030]: I1128 12:18:42.399110 5030 scope.go:117] "RemoveContainer" 
containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:18:42 crc kubenswrapper[5030]: E1128 12:18:42.399406 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:18:42 crc kubenswrapper[5030]: I1128 12:18:42.941732 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:42 crc kubenswrapper[5030]: I1128 12:18:42.941808 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:42 crc kubenswrapper[5030]: I1128 12:18:42.978151 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:42 crc kubenswrapper[5030]: I1128 12:18:42.996339 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:43 crc kubenswrapper[5030]: I1128 12:18:43.409276 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:43 crc kubenswrapper[5030]: I1128 12:18:43.479683 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Nov 28 12:18:43 crc kubenswrapper[5030]: I1128 12:18:43.536886 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:43 crc kubenswrapper[5030]: I1128 12:18:43.537253 5030 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:45 crc kubenswrapper[5030]: I1128 12:18:45.366451 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:45 crc kubenswrapper[5030]: I1128 12:18:45.374257 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Nov 28 12:18:57 crc kubenswrapper[5030]: I1128 12:18:57.393024 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:18:57 crc kubenswrapper[5030]: E1128 12:18:57.394669 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:19:03 crc kubenswrapper[5030]: I1128 12:19:03.603304 5030 scope.go:117] "RemoveContainer" containerID="5672d8f9ff3ac798cceacaab3d7180f209fcdbe28b413f1a650e33f582de3535" Nov 28 12:19:03 crc kubenswrapper[5030]: I1128 12:19:03.641146 5030 scope.go:117] "RemoveContainer" containerID="a2073e4ee8647538923d8e6e1752350724fb78bc31280b46be66e003aece4e32" Nov 28 12:19:10 crc kubenswrapper[5030]: I1128 12:19:10.393449 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:19:10 crc kubenswrapper[5030]: E1128 12:19:10.394811 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:19:25 crc kubenswrapper[5030]: I1128 12:19:25.394124 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:19:25 crc kubenswrapper[5030]: E1128 12:19:25.396011 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:19:27 crc kubenswrapper[5030]: I1128 12:19:27.020352 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:19:27 crc kubenswrapper[5030]: I1128 12:19:27.020742 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-log" containerID="cri-o://c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345" gracePeriod=30 Nov 28 12:19:27 crc kubenswrapper[5030]: I1128 12:19:27.020839 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-httpd" containerID="cri-o://0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982" gracePeriod=30 Nov 28 12:19:27 crc kubenswrapper[5030]: I1128 12:19:27.273833 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:19:27 crc 
kubenswrapper[5030]: I1128 12:19:27.274961 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-log" containerID="cri-o://d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0" gracePeriod=30 Nov 28 12:19:27 crc kubenswrapper[5030]: I1128 12:19:27.276085 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-httpd" containerID="cri-o://896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03" gracePeriod=30 Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.072603 5030 generic.go:334] "Generic (PLEG): container finished" podID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerID="d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0" exitCode=143 Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.072733 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"8b290bf2-1b37-4532-8709-b7d38bdce138","Type":"ContainerDied","Data":"d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0"} Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.075542 5030 generic.go:334] "Generic (PLEG): container finished" podID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerID="c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345" exitCode=143 Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.075586 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"a4c4907d-c06e-490b-a02d-49dfc45e62b0","Type":"ContainerDied","Data":"c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345"} Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.453834 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["glance-kuttl-tests/glance-db-sync-74wtq"] Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.467788 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-74wtq"] Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.514857 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancecb75-account-delete-f5wt4"] Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.515881 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.572546 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancecb75-account-delete-f5wt4"] Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.582376 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfc5b\" (UniqueName: \"kubernetes.io/projected/4a411a94-5922-4056-9246-f325ad4111a9-kube-api-access-lfc5b\") pod \"glancecb75-account-delete-f5wt4\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.582707 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a411a94-5922-4056-9246-f325ad4111a9-operator-scripts\") pod \"glancecb75-account-delete-f5wt4\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.619537 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.620080 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" 
podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-log" containerID="cri-o://4a034250df6d23ccc2ceb665e79f8c99eb3cc19b7fd1a31056c597251c6067e6" gracePeriod=30 Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.620356 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-httpd" containerID="cri-o://8f229126b32f77e8c9728409abb3ef64ed2eb7762e3dde1bbd2d43a3650ce27f" gracePeriod=30 Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.646803 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.647152 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-log" containerID="cri-o://3619beffb11c6788fc77b0c6c3dbcb59c0da43d4311364acc06abd8289789d51" gracePeriod=30 Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.647217 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-httpd" containerID="cri-o://013354d21a664bc410882d9cb5ee4b82c3db157cf02ed14d44093697fc3be7fc" gracePeriod=30 Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.685825 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfc5b\" (UniqueName: \"kubernetes.io/projected/4a411a94-5922-4056-9246-f325ad4111a9-kube-api-access-lfc5b\") pod \"glancecb75-account-delete-f5wt4\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.685910 5030 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a411a94-5922-4056-9246-f325ad4111a9-operator-scripts\") pod \"glancecb75-account-delete-f5wt4\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.686898 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a411a94-5922-4056-9246-f325ad4111a9-operator-scripts\") pod \"glancecb75-account-delete-f5wt4\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.708087 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfc5b\" (UniqueName: \"kubernetes.io/projected/4a411a94-5922-4056-9246-f325ad4111a9-kube-api-access-lfc5b\") pod \"glancecb75-account-delete-f5wt4\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:28 crc kubenswrapper[5030]: I1128 12:19:28.832437 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:29 crc kubenswrapper[5030]: I1128 12:19:29.090299 5030 generic.go:334] "Generic (PLEG): container finished" podID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerID="3619beffb11c6788fc77b0c6c3dbcb59c0da43d4311364acc06abd8289789d51" exitCode=143 Nov 28 12:19:29 crc kubenswrapper[5030]: I1128 12:19:29.090670 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5","Type":"ContainerDied","Data":"3619beffb11c6788fc77b0c6c3dbcb59c0da43d4311364acc06abd8289789d51"} Nov 28 12:19:29 crc kubenswrapper[5030]: I1128 12:19:29.093955 5030 generic.go:334] "Generic (PLEG): container finished" podID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerID="4a034250df6d23ccc2ceb665e79f8c99eb3cc19b7fd1a31056c597251c6067e6" exitCode=143 Nov 28 12:19:29 crc kubenswrapper[5030]: I1128 12:19:29.094000 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb","Type":"ContainerDied","Data":"4a034250df6d23ccc2ceb665e79f8c99eb3cc19b7fd1a31056c597251c6067e6"} Nov 28 12:19:29 crc kubenswrapper[5030]: I1128 12:19:29.350867 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancecb75-account-delete-f5wt4"] Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.104878 5030 generic.go:334] "Generic (PLEG): container finished" podID="4a411a94-5922-4056-9246-f325ad4111a9" containerID="927e0fe2573eceb564ed28d253b7d0df10adfb4c70a395ec3444e2f734d903a5" exitCode=0 Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.104977 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" event={"ID":"4a411a94-5922-4056-9246-f325ad4111a9","Type":"ContainerDied","Data":"927e0fe2573eceb564ed28d253b7d0df10adfb4c70a395ec3444e2f734d903a5"} 
Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.105361 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" event={"ID":"4a411a94-5922-4056-9246-f325ad4111a9","Type":"ContainerStarted","Data":"4d809dbed64174793ca0b2ac6e35a949455779c3fdc49a7b9b7ff725d3e486a1"} Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.428641 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512c51fc-55b2-4858-9d28-991826eafff1" path="/var/lib/kubelet/pods/512c51fc-55b2-4858-9d28-991826eafff1/volumes" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.658090 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.744130 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.744270 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-var-locks-brick\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.744354 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.744545 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-iscsi\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.744652 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-logs\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.744731 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hks8g\" (UniqueName: \"kubernetes.io/projected/a4c4907d-c06e-490b-a02d-49dfc45e62b0-kube-api-access-hks8g\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.744799 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-scripts\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745060 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-sys\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745211 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-httpd-run\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745243 5030 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-dev\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745538 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-run\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745648 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-lib-modules\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745782 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-nvme\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745856 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-config-data\") pod \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\" (UID: \"a4c4907d-c06e-490b-a02d-49dfc45e62b0\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.745985 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-logs" (OuterVolumeSpecName: "logs") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.746066 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.746100 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.746130 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-run" (OuterVolumeSpecName: "run") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.746115 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-sys" (OuterVolumeSpecName: "sys") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.746185 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-dev" (OuterVolumeSpecName: "dev") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.746253 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.746278 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747126 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747151 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747166 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747185 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747198 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747212 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747225 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747237 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/a4c4907d-c06e-490b-a02d-49dfc45e62b0-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.747652 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.754022 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c4907d-c06e-490b-a02d-49dfc45e62b0-kube-api-access-hks8g" (OuterVolumeSpecName: "kube-api-access-hks8g") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "kube-api-access-hks8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.758750 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.762458 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.762523 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-scripts" (OuterVolumeSpecName: "scripts") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.802659 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-config-data" (OuterVolumeSpecName: "config-data") pod "a4c4907d-c06e-490b-a02d-49dfc45e62b0" (UID: "a4c4907d-c06e-490b-a02d-49dfc45e62b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.848529 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.848585 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.848600 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.848612 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hks8g\" (UniqueName: \"kubernetes.io/projected/a4c4907d-c06e-490b-a02d-49dfc45e62b0-kube-api-access-hks8g\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 
12:19:30.848625 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4c4907d-c06e-490b-a02d-49dfc45e62b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.848632 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a4c4907d-c06e-490b-a02d-49dfc45e62b0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.864975 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.877364 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.892737 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949390 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-httpd-run\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949453 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hntl8\" (UniqueName: \"kubernetes.io/projected/8b290bf2-1b37-4532-8709-b7d38bdce138-kube-api-access-hntl8\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949517 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-lib-modules\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949544 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-iscsi\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949571 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-nvme\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949651 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-var-locks-brick\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949684 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-run\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949753 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-config-data\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949820 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-logs\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949852 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-dev\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949875 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949918 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-scripts\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.949958 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-sys\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.950014 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"8b290bf2-1b37-4532-8709-b7d38bdce138\" (UID: \"8b290bf2-1b37-4532-8709-b7d38bdce138\") " Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.950431 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.950456 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.952816 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.952816 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.952912 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.952993 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-run" (OuterVolumeSpecName: "run") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.953001 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.953808 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.953834 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b290bf2-1b37-4532-8709-b7d38bdce138-kube-api-access-hntl8" (OuterVolumeSpecName: "kube-api-access-hntl8") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "kube-api-access-hntl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.953866 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-dev" (OuterVolumeSpecName: "dev") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.953882 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-sys" (OuterVolumeSpecName: "sys") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.954316 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-logs" (OuterVolumeSpecName: "logs") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.956560 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.956596 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.961868 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-scripts" (OuterVolumeSpecName: "scripts") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:19:30 crc kubenswrapper[5030]: I1128 12:19:30.991038 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-config-data" (OuterVolumeSpecName: "config-data") pod "8b290bf2-1b37-4532-8709-b7d38bdce138" (UID: "8b290bf2-1b37-4532-8709-b7d38bdce138"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052395 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052436 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052450 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-logs\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052459 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-dev\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052520 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052543 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b290bf2-1b37-4532-8709-b7d38bdce138-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 
crc kubenswrapper[5030]: I1128 12:19:31.052554 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-sys\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052571 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052582 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b290bf2-1b37-4532-8709-b7d38bdce138-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052592 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hntl8\" (UniqueName: \"kubernetes.io/projected/8b290bf2-1b37-4532-8709-b7d38bdce138-kube-api-access-hntl8\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052601 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-lib-modules\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052610 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-iscsi\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052620 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-etc-nvme\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.052629 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/8b290bf2-1b37-4532-8709-b7d38bdce138-var-locks-brick\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.069239 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.095883 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.117257 5030 generic.go:334] "Generic (PLEG): container finished" podID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerID="0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982" exitCode=0 Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.117330 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.117362 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"a4c4907d-c06e-490b-a02d-49dfc45e62b0","Type":"ContainerDied","Data":"0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982"} Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.117454 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"a4c4907d-c06e-490b-a02d-49dfc45e62b0","Type":"ContainerDied","Data":"72dc4b19ffcefe007eda49308e5e11385769d839657582c40c19563f4cfebc9c"} Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.117525 5030 scope.go:117] "RemoveContainer" containerID="0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.121070 5030 generic.go:334] "Generic (PLEG): container finished" 
podID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerID="896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03" exitCode=0 Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.121444 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.122281 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"8b290bf2-1b37-4532-8709-b7d38bdce138","Type":"ContainerDied","Data":"896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03"} Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.122342 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"8b290bf2-1b37-4532-8709-b7d38bdce138","Type":"ContainerDied","Data":"22c249f741f3a5e03ae327887f157211862bdb0ae08cbba251f968c5476a9beb"} Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.144116 5030 scope.go:117] "RemoveContainer" containerID="c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.155420 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.155460 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.171698 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.173713 5030 scope.go:117] "RemoveContainer" containerID="0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982" 
Nov 28 12:19:31 crc kubenswrapper[5030]: E1128 12:19:31.174491 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982\": container with ID starting with 0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982 not found: ID does not exist" containerID="0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.174528 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982"} err="failed to get container status \"0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982\": rpc error: code = NotFound desc = could not find container \"0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982\": container with ID starting with 0c617586d55514a404e785c6b89e04c5c6d8b6e65ef5462a6f09eaca2a5fc982 not found: ID does not exist" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.174559 5030 scope.go:117] "RemoveContainer" containerID="c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345" Nov 28 12:19:31 crc kubenswrapper[5030]: E1128 12:19:31.174989 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345\": container with ID starting with c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345 not found: ID does not exist" containerID="c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.175009 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345"} err="failed to get container status 
\"c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345\": rpc error: code = NotFound desc = could not find container \"c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345\": container with ID starting with c8f2024c85deea6a69fc3f4ffa2f33f204ed9445295164cf6112961f4705f345 not found: ID does not exist" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.175103 5030 scope.go:117] "RemoveContainer" containerID="896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.184003 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.192246 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.196848 5030 scope.go:117] "RemoveContainer" containerID="d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.197735 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.230364 5030 scope.go:117] "RemoveContainer" containerID="896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03" Nov 28 12:19:31 crc kubenswrapper[5030]: E1128 12:19:31.231053 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03\": container with ID starting with 896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03 not found: ID does not exist" containerID="896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.231095 5030 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03"} err="failed to get container status \"896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03\": rpc error: code = NotFound desc = could not find container \"896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03\": container with ID starting with 896c0e151b90659c0b95428c7bb5c72a9aa2e5dce3bf9c6cbff6ca87a73cda03 not found: ID does not exist" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.231128 5030 scope.go:117] "RemoveContainer" containerID="d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0" Nov 28 12:19:31 crc kubenswrapper[5030]: E1128 12:19:31.232075 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0\": container with ID starting with d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0 not found: ID does not exist" containerID="d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.232124 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0"} err="failed to get container status \"d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0\": rpc error: code = NotFound desc = could not find container \"d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0\": container with ID starting with d34f30a9ced201719564cf8eec7f68778a4fa595f93cdac7417d9ff8755304e0 not found: ID does not exist" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.366221 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.460356 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfc5b\" (UniqueName: \"kubernetes.io/projected/4a411a94-5922-4056-9246-f325ad4111a9-kube-api-access-lfc5b\") pod \"4a411a94-5922-4056-9246-f325ad4111a9\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.460632 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a411a94-5922-4056-9246-f325ad4111a9-operator-scripts\") pod \"4a411a94-5922-4056-9246-f325ad4111a9\" (UID: \"4a411a94-5922-4056-9246-f325ad4111a9\") " Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.461801 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a411a94-5922-4056-9246-f325ad4111a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a411a94-5922-4056-9246-f325ad4111a9" (UID: "4a411a94-5922-4056-9246-f325ad4111a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.463941 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a411a94-5922-4056-9246-f325ad4111a9-kube-api-access-lfc5b" (OuterVolumeSpecName: "kube-api-access-lfc5b") pod "4a411a94-5922-4056-9246-f325ad4111a9" (UID: "4a411a94-5922-4056-9246-f325ad4111a9"). InnerVolumeSpecName "kube-api-access-lfc5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.563023 5030 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a411a94-5922-4056-9246-f325ad4111a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.563073 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfc5b\" (UniqueName: \"kubernetes.io/projected/4a411a94-5922-4056-9246-f325ad4111a9-kube-api-access-lfc5b\") on node \"crc\" DevicePath \"\"" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.814940 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.146:9292/healthcheck\": read tcp 10.217.0.2:40892->10.217.0.146:9292: read: connection reset by peer" Nov 28 12:19:31 crc kubenswrapper[5030]: I1128 12:19:31.815019 5030 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.146:9292/healthcheck\": read tcp 10.217.0.2:40898->10.217.0.146:9292: read: connection reset by peer" Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.146538 5030 generic.go:334] "Generic (PLEG): container finished" podID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerID="013354d21a664bc410882d9cb5ee4b82c3db157cf02ed14d44093697fc3be7fc" exitCode=0 Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.146696 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5","Type":"ContainerDied","Data":"013354d21a664bc410882d9cb5ee4b82c3db157cf02ed14d44093697fc3be7fc"} Nov 
28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.149048 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4" event={"ID":"4a411a94-5922-4056-9246-f325ad4111a9","Type":"ContainerDied","Data":"4d809dbed64174793ca0b2ac6e35a949455779c3fdc49a7b9b7ff725d3e486a1"}
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.149074 5030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d809dbed64174793ca0b2ac6e35a949455779c3fdc49a7b9b7ff725d3e486a1"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.149233 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancecb75-account-delete-f5wt4"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.152186 5030 generic.go:334] "Generic (PLEG): container finished" podID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerID="8f229126b32f77e8c9728409abb3ef64ed2eb7762e3dde1bbd2d43a3650ce27f" exitCode=0
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.152248 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb","Type":"ContainerDied","Data":"8f229126b32f77e8c9728409abb3ef64ed2eb7762e3dde1bbd2d43a3650ce27f"}
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.243145 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.249121 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.275799 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.275884 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-config-data\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.275922 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-httpd-run\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.275956 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-nvme\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276001 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nch6r\" (UniqueName: \"kubernetes.io/projected/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-kube-api-access-nch6r\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276174 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276259 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-logs\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276354 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-dev\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276388 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-logs\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276425 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-iscsi\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276460 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-scripts\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276509 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-lib-modules\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276559 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-dev\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276606 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-var-locks-brick\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276642 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-var-locks-brick\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276671 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276699 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-iscsi\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276728 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276757 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-config-data\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276787 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-run\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276816 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-sys\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276504 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276936 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.276987 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.277042 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.277090 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.277124 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-logs" (OuterVolumeSpecName: "logs") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.277455 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-dev" (OuterVolumeSpecName: "dev") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.277576 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-logs" (OuterVolumeSpecName: "logs") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.277615 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.277625 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-dev" (OuterVolumeSpecName: "dev") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278010 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-lib-modules\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278039 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-run" (OuterVolumeSpecName: "run") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278076 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-nvme\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278238 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278306 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-scripts\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278084 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-sys" (OuterVolumeSpecName: "sys") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278115 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278311 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278340 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-httpd-run\") pod \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\" (UID: \"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278402 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-sys\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278432 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-run\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.278513 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c55f\" (UniqueName: \"kubernetes.io/projected/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-kube-api-access-7c55f\") pod \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\" (UID: \"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5\") "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279163 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279186 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279202 5030 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-nvme\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279218 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-logs\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279233 5030 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-logs\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279249 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-dev\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279264 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279279 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-lib-modules\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279295 5030 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-dev\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279331 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-var-locks-brick\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279349 5030 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-var-locks-brick\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279364 5030 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-etc-iscsi\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279378 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279393 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-sys\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279407 5030 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-lib-modules\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279231 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279487 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-sys" (OuterVolumeSpecName: "sys") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279527 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-run" (OuterVolumeSpecName: "run") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.279974 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance-cache") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.281284 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance-cache") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.281617 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-kube-api-access-nch6r" (OuterVolumeSpecName: "kube-api-access-nch6r") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "kube-api-access-nch6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.281642 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.282903 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-kube-api-access-7c55f" (OuterVolumeSpecName: "kube-api-access-7c55f") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "kube-api-access-7c55f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.284090 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-scripts" (OuterVolumeSpecName: "scripts") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.285888 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.306825 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-scripts" (OuterVolumeSpecName: "scripts") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.322297 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-config-data" (OuterVolumeSpecName: "config-data") pod "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" (UID: "9797b0e7-3a99-4a00-aaec-c8d7b5484fdb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.329232 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-config-data" (OuterVolumeSpecName: "config-data") pod "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" (UID: "79ae9917-8b41-4fd9-a1bc-1bf8b0467da5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380734 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380791 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380808 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380833 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380848 5030 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380861 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380872 5030 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-sys\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380884 5030 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-run\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380898 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c55f\" (UniqueName: \"kubernetes.io/projected/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-kube-api-access-7c55f\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380924 5030 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380937 5030 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380951 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nch6r\" (UniqueName: \"kubernetes.io/projected/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb-kube-api-access-nch6r\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.380989 5030 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.393687 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.396170 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.397690 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.403973 5030 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.408157 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" path="/var/lib/kubelet/pods/8b290bf2-1b37-4532-8709-b7d38bdce138/volumes"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.410086 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" path="/var/lib/kubelet/pods/a4c4907d-c06e-490b-a02d-49dfc45e62b0/volumes"
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.482623 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.482663 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.482678 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:32 crc kubenswrapper[5030]: I1128 12:19:32.482690 5030 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.168609 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0"
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.169040 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"9797b0e7-3a99-4a00-aaec-c8d7b5484fdb","Type":"ContainerDied","Data":"c7d982a7bd7d67d7afb10291efec11b7cecf7407f2b6c56abd64b4caa9277196"}
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.169203 5030 scope.go:117] "RemoveContainer" containerID="8f229126b32f77e8c9728409abb3ef64ed2eb7762e3dde1bbd2d43a3650ce27f"
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.171973 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"79ae9917-8b41-4fd9-a1bc-1bf8b0467da5","Type":"ContainerDied","Data":"6e99844c9d79b3632a29d577ec05fc9ad9e58424e834bff04ee70172afe881da"}
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.172069 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0"
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.208729 5030 scope.go:117] "RemoveContainer" containerID="4a034250df6d23ccc2ceb665e79f8c99eb3cc19b7fd1a31056c597251c6067e6"
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.219623 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.243343 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.254458 5030 scope.go:117] "RemoveContainer" containerID="013354d21a664bc410882d9cb5ee4b82c3db157cf02ed14d44093697fc3be7fc"
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.256690 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.273355 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.294709 5030 scope.go:117] "RemoveContainer" containerID="3619beffb11c6788fc77b0c6c3dbcb59c0da43d4311364acc06abd8289789d51"
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.535041 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-bdjls"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.540213 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-bdjls"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.556060 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancecb75-account-delete-f5wt4"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.561245 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-cb75-account-create-update-69wqw"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.566271 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-cb75-account-create-update-69wqw"]
Nov 28 12:19:33 crc kubenswrapper[5030]: I1128 12:19:33.571196 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancecb75-account-delete-f5wt4"]
Nov 28 12:19:34 crc kubenswrapper[5030]: I1128 12:19:34.407964 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b577ff1-dc09-42d5-95b4-41d690941740" path="/var/lib/kubelet/pods/1b577ff1-dc09-42d5-95b4-41d690941740/volumes"
Nov 28 12:19:34 crc kubenswrapper[5030]: I1128 12:19:34.409574 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2731a8d0-e56b-4f9a-a63c-60e93134be84" path="/var/lib/kubelet/pods/2731a8d0-e56b-4f9a-a63c-60e93134be84/volumes"
Nov 28 12:19:34 crc kubenswrapper[5030]: I1128 12:19:34.410620 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a411a94-5922-4056-9246-f325ad4111a9" path="/var/lib/kubelet/pods/4a411a94-5922-4056-9246-f325ad4111a9/volumes"
Nov 28 12:19:34 crc kubenswrapper[5030]: I1128 12:19:34.413014 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" path="/var/lib/kubelet/pods/79ae9917-8b41-4fd9-a1bc-1bf8b0467da5/volumes"
Nov 28 12:19:34 crc kubenswrapper[5030]: I1128 12:19:34.415643 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" path="/var/lib/kubelet/pods/9797b0e7-3a99-4a00-aaec-c8d7b5484fdb/volumes"
Nov 28 12:19:36 crc kubenswrapper[5030]: I1128 12:19:36.393517 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c"
Nov 28 12:19:36 crc kubenswrapper[5030]: E1128 12:19:36.394250 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67"
Nov 28 12:19:50 crc kubenswrapper[5030]: I1128 12:19:50.394025 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c"
Nov 28 12:19:50 crc kubenswrapper[5030]: E1128 12:19:50.395435 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67"
Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.704339 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sg45s"]
Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705005 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-httpd"
Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705035 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-httpd"
Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705067 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-httpd"
Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705081 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-httpd"
Nov 28 12:19:52
crc kubenswrapper[5030]: E1128 12:19:52.705102 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705116 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705143 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705156 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705177 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705190 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705211 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705223 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705260 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705273 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705303 5030 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a411a94-5922-4056-9246-f325ad4111a9" containerName="mariadb-account-delete" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705315 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a411a94-5922-4056-9246-f325ad4111a9" containerName="mariadb-account-delete" Nov 28 12:19:52 crc kubenswrapper[5030]: E1128 12:19:52.705352 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705364 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705717 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705754 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705781 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705807 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705834 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="9797b0e7-3a99-4a00-aaec-c8d7b5484fdb" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705864 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b290bf2-1b37-4532-8709-b7d38bdce138" containerName="glance-log" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705888 5030 
memory_manager.go:354] "RemoveStaleState removing state" podUID="79ae9917-8b41-4fd9-a1bc-1bf8b0467da5" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705915 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c4907d-c06e-490b-a02d-49dfc45e62b0" containerName="glance-httpd" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.705936 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a411a94-5922-4056-9246-f325ad4111a9" containerName="mariadb-account-delete" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.707873 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.732693 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sg45s"] Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.886817 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-utilities\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.886884 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt9m5\" (UniqueName: \"kubernetes.io/projected/2b9c6b5c-1946-4d3d-b861-f5f036908714-kube-api-access-bt9m5\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.886932 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-catalog-content\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.989648 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-utilities\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.990159 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt9m5\" (UniqueName: \"kubernetes.io/projected/2b9c6b5c-1946-4d3d-b861-f5f036908714-kube-api-access-bt9m5\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.990278 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-utilities\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.990429 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-catalog-content\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:52 crc kubenswrapper[5030]: I1128 12:19:52.990681 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-catalog-content\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:53 crc kubenswrapper[5030]: I1128 12:19:53.015511 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt9m5\" (UniqueName: \"kubernetes.io/projected/2b9c6b5c-1946-4d3d-b861-f5f036908714-kube-api-access-bt9m5\") pod \"certified-operators-sg45s\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:53 crc kubenswrapper[5030]: I1128 12:19:53.061706 5030 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:19:53 crc kubenswrapper[5030]: I1128 12:19:53.533450 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sg45s"] Nov 28 12:19:54 crc kubenswrapper[5030]: I1128 12:19:54.412837 5030 generic.go:334] "Generic (PLEG): container finished" podID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerID="3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c" exitCode=0 Nov 28 12:19:54 crc kubenswrapper[5030]: I1128 12:19:54.416971 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sg45s" event={"ID":"2b9c6b5c-1946-4d3d-b861-f5f036908714","Type":"ContainerDied","Data":"3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c"} Nov 28 12:19:54 crc kubenswrapper[5030]: I1128 12:19:54.417087 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sg45s" event={"ID":"2b9c6b5c-1946-4d3d-b861-f5f036908714","Type":"ContainerStarted","Data":"f0e8f28a278dd70f5199697917345e773fd6ebba555de9f719fac3e71be46e5a"} Nov 28 12:19:56 crc kubenswrapper[5030]: I1128 12:19:56.446394 5030 generic.go:334] "Generic (PLEG): container 
finished" podID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerID="f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783" exitCode=0 Nov 28 12:19:56 crc kubenswrapper[5030]: I1128 12:19:56.446509 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sg45s" event={"ID":"2b9c6b5c-1946-4d3d-b861-f5f036908714","Type":"ContainerDied","Data":"f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783"} Nov 28 12:19:57 crc kubenswrapper[5030]: I1128 12:19:57.461727 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sg45s" event={"ID":"2b9c6b5c-1946-4d3d-b861-f5f036908714","Type":"ContainerStarted","Data":"19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9"} Nov 28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.061994 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.063003 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.124266 5030 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.163333 5030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sg45s" podStartSLOduration=8.755464915 podStartE2EDuration="11.163303093s" podCreationTimestamp="2025-11-28 12:19:52 +0000 UTC" firstStartedPulling="2025-11-28 12:19:54.419300527 +0000 UTC m=+1612.361043240" lastFinishedPulling="2025-11-28 12:19:56.827138695 +0000 UTC m=+1614.768881418" observedRunningTime="2025-11-28 12:19:57.486862516 +0000 UTC m=+1615.428605209" watchObservedRunningTime="2025-11-28 12:20:03.163303093 +0000 UTC m=+1621.105045806" Nov 
28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.393859 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:20:03 crc kubenswrapper[5030]: E1128 12:20:03.394278 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.584181 5030 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.658240 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sg45s"] Nov 28 12:20:03 crc kubenswrapper[5030]: I1128 12:20:03.840734 5030 scope.go:117] "RemoveContainer" containerID="29282ecd553125ddf2a32ee18e61e61ec54e77a99eee8b11f63bf5fd4b3ab22b" Nov 28 12:20:05 crc kubenswrapper[5030]: I1128 12:20:05.553531 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sg45s" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="registry-server" containerID="cri-o://19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9" gracePeriod=2 Nov 28 12:20:05 crc kubenswrapper[5030]: I1128 12:20:05.998543 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.161421 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-utilities\") pod \"2b9c6b5c-1946-4d3d-b861-f5f036908714\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.161694 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt9m5\" (UniqueName: \"kubernetes.io/projected/2b9c6b5c-1946-4d3d-b861-f5f036908714-kube-api-access-bt9m5\") pod \"2b9c6b5c-1946-4d3d-b861-f5f036908714\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.161773 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-catalog-content\") pod \"2b9c6b5c-1946-4d3d-b861-f5f036908714\" (UID: \"2b9c6b5c-1946-4d3d-b861-f5f036908714\") " Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.170804 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-utilities" (OuterVolumeSpecName: "utilities") pod "2b9c6b5c-1946-4d3d-b861-f5f036908714" (UID: "2b9c6b5c-1946-4d3d-b861-f5f036908714"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.176688 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9c6b5c-1946-4d3d-b861-f5f036908714-kube-api-access-bt9m5" (OuterVolumeSpecName: "kube-api-access-bt9m5") pod "2b9c6b5c-1946-4d3d-b861-f5f036908714" (UID: "2b9c6b5c-1946-4d3d-b861-f5f036908714"). InnerVolumeSpecName "kube-api-access-bt9m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.264073 5030 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.264136 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt9m5\" (UniqueName: \"kubernetes.io/projected/2b9c6b5c-1946-4d3d-b861-f5f036908714-kube-api-access-bt9m5\") on node \"crc\" DevicePath \"\"" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.298327 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b9c6b5c-1946-4d3d-b861-f5f036908714" (UID: "2b9c6b5c-1946-4d3d-b861-f5f036908714"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.366022 5030 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9c6b5c-1946-4d3d-b861-f5f036908714-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.574658 5030 generic.go:334] "Generic (PLEG): container finished" podID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerID="19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9" exitCode=0 Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.574836 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sg45s" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.576261 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sg45s" event={"ID":"2b9c6b5c-1946-4d3d-b861-f5f036908714","Type":"ContainerDied","Data":"19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9"} Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.576450 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sg45s" event={"ID":"2b9c6b5c-1946-4d3d-b861-f5f036908714","Type":"ContainerDied","Data":"f0e8f28a278dd70f5199697917345e773fd6ebba555de9f719fac3e71be46e5a"} Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.576532 5030 scope.go:117] "RemoveContainer" containerID="19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.611764 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sg45s"] Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.623805 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sg45s"] Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.632787 5030 scope.go:117] "RemoveContainer" containerID="f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.671415 5030 scope.go:117] "RemoveContainer" containerID="3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.712587 5030 scope.go:117] "RemoveContainer" containerID="19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9" Nov 28 12:20:06 crc kubenswrapper[5030]: E1128 12:20:06.713412 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9\": container with ID starting with 19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9 not found: ID does not exist" containerID="19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.713541 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9"} err="failed to get container status \"19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9\": rpc error: code = NotFound desc = could not find container \"19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9\": container with ID starting with 19af48588cd1b078aabd742e07a0ed656a0f575a66c45f39de40431fe2f8afe9 not found: ID does not exist" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.713615 5030 scope.go:117] "RemoveContainer" containerID="f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783" Nov 28 12:20:06 crc kubenswrapper[5030]: E1128 12:20:06.714258 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783\": container with ID starting with f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783 not found: ID does not exist" containerID="f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.714329 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783"} err="failed to get container status \"f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783\": rpc error: code = NotFound desc = could not find container \"f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783\": container with ID 
starting with f6364ca0e4e45fda01c17964177771b71fcea50b7a4a803243c8079f23efb783 not found: ID does not exist" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.714370 5030 scope.go:117] "RemoveContainer" containerID="3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c" Nov 28 12:20:06 crc kubenswrapper[5030]: E1128 12:20:06.714865 5030 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c\": container with ID starting with 3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c not found: ID does not exist" containerID="3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c" Nov 28 12:20:06 crc kubenswrapper[5030]: I1128 12:20:06.714964 5030 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c"} err="failed to get container status \"3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c\": rpc error: code = NotFound desc = could not find container \"3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c\": container with ID starting with 3b024924346a26f667107aeb19ea780e3ee1e1d316e3d34e8be5ff26eb170d4c not found: ID does not exist" Nov 28 12:20:08 crc kubenswrapper[5030]: I1128 12:20:08.424919 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" path="/var/lib/kubelet/pods/2b9c6b5c-1946-4d3d-b861-f5f036908714/volumes" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.082823 5030 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ddkqf/must-gather-snk7m"] Nov 28 12:20:10 crc kubenswrapper[5030]: E1128 12:20:10.083227 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="extract-utilities" Nov 28 12:20:10 crc 
kubenswrapper[5030]: I1128 12:20:10.083243 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="extract-utilities" Nov 28 12:20:10 crc kubenswrapper[5030]: E1128 12:20:10.083265 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="registry-server" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.083274 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="registry-server" Nov 28 12:20:10 crc kubenswrapper[5030]: E1128 12:20:10.083302 5030 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="extract-content" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.083313 5030 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="extract-content" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.083508 5030 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9c6b5c-1946-4d3d-b861-f5f036908714" containerName="registry-server" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.084575 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.090629 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ddkqf"/"openshift-service-ca.crt" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.090924 5030 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ddkqf"/"kube-root-ca.crt" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.148827 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01afb59a-3bf5-47f1-8256-b96dd205649a-must-gather-output\") pod \"must-gather-snk7m\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.148940 5030 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94fn2\" (UniqueName: \"kubernetes.io/projected/01afb59a-3bf5-47f1-8256-b96dd205649a-kube-api-access-94fn2\") pod \"must-gather-snk7m\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.187299 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ddkqf/must-gather-snk7m"] Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.250437 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01afb59a-3bf5-47f1-8256-b96dd205649a-must-gather-output\") pod \"must-gather-snk7m\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.250911 5030 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-94fn2\" (UniqueName: \"kubernetes.io/projected/01afb59a-3bf5-47f1-8256-b96dd205649a-kube-api-access-94fn2\") pod \"must-gather-snk7m\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.250954 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01afb59a-3bf5-47f1-8256-b96dd205649a-must-gather-output\") pod \"must-gather-snk7m\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.278016 5030 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94fn2\" (UniqueName: \"kubernetes.io/projected/01afb59a-3bf5-47f1-8256-b96dd205649a-kube-api-access-94fn2\") pod \"must-gather-snk7m\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.403562 5030 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:20:10 crc kubenswrapper[5030]: I1128 12:20:10.866374 5030 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ddkqf/must-gather-snk7m"] Nov 28 12:20:11 crc kubenswrapper[5030]: I1128 12:20:11.619260 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddkqf/must-gather-snk7m" event={"ID":"01afb59a-3bf5-47f1-8256-b96dd205649a","Type":"ContainerStarted","Data":"336a7916b3263adfe2d2363ca9b624eb47e8c4806023b34ef59a40498d810da6"} Nov 28 12:20:16 crc kubenswrapper[5030]: I1128 12:20:16.394005 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:20:16 crc kubenswrapper[5030]: E1128 12:20:16.395279 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:20:16 crc kubenswrapper[5030]: I1128 12:20:16.667026 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddkqf/must-gather-snk7m" event={"ID":"01afb59a-3bf5-47f1-8256-b96dd205649a","Type":"ContainerStarted","Data":"99d726e78740b3e869d06719097343e4eae36819239ce1dd6911e8475e8b5c17"} Nov 28 12:20:16 crc kubenswrapper[5030]: I1128 12:20:16.667085 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddkqf/must-gather-snk7m" event={"ID":"01afb59a-3bf5-47f1-8256-b96dd205649a","Type":"ContainerStarted","Data":"da32bfd98af30cfa94dac44fc9a51cfb136ac3804f50c163cd3efe8514409f40"} Nov 28 12:20:16 crc kubenswrapper[5030]: I1128 12:20:16.692294 5030 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-ddkqf/must-gather-snk7m" podStartSLOduration=2.145652425 podStartE2EDuration="6.692267984s" podCreationTimestamp="2025-11-28 12:20:10 +0000 UTC" firstStartedPulling="2025-11-28 12:20:10.873318304 +0000 UTC m=+1628.815061007" lastFinishedPulling="2025-11-28 12:20:15.419933843 +0000 UTC m=+1633.361676566" observedRunningTime="2025-11-28 12:20:16.686663248 +0000 UTC m=+1634.628405941" watchObservedRunningTime="2025-11-28 12:20:16.692267984 +0000 UTC m=+1634.634010677" Nov 28 12:20:20 crc kubenswrapper[5030]: I1128 12:20:20.065568 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-create-9w92v"] Nov 28 12:20:20 crc kubenswrapper[5030]: I1128 12:20:20.080817 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-create-9w92v"] Nov 28 12:20:20 crc kubenswrapper[5030]: I1128 12:20:20.405797 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c245622d-f12e-4906-b7e5-180b9dc50229" path="/var/lib/kubelet/pods/c245622d-f12e-4906-b7e5-180b9dc50229/volumes" Nov 28 12:20:21 crc kubenswrapper[5030]: I1128 12:20:21.031939 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9"] Nov 28 12:20:21 crc kubenswrapper[5030]: I1128 12:20:21.041733 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-d06f-account-create-update-dwxq9"] Nov 28 12:20:22 crc kubenswrapper[5030]: I1128 12:20:22.404125 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a579a17-723b-491c-8e33-ce15cb47f3f3" path="/var/lib/kubelet/pods/3a579a17-723b-491c-8e33-ce15cb47f3f3/volumes" Nov 28 12:20:27 crc kubenswrapper[5030]: I1128 12:20:27.393611 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:20:27 crc kubenswrapper[5030]: E1128 12:20:27.396344 5030 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:20:38 crc kubenswrapper[5030]: I1128 12:20:38.393817 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:20:38 crc kubenswrapper[5030]: E1128 12:20:38.394804 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:20:41 crc kubenswrapper[5030]: I1128 12:20:41.049950 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-tn92m"] Nov 28 12:20:41 crc kubenswrapper[5030]: I1128 12:20:41.058377 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-tn92m"] Nov 28 12:20:42 crc kubenswrapper[5030]: I1128 12:20:42.402157 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5" path="/var/lib/kubelet/pods/a14f92e4-8dc5-4fb4-8cf7-8ed25b79ebc5/volumes" Nov 28 12:20:47 crc kubenswrapper[5030]: I1128 12:20:47.041276 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-bwvll"] Nov 28 12:20:47 crc kubenswrapper[5030]: I1128 12:20:47.049700 5030 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-bwvll"] 
Nov 28 12:20:48 crc kubenswrapper[5030]: I1128 12:20:48.401839 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482f7ca1-8b55-4a4d-8a78-fe296e1801c0" path="/var/lib/kubelet/pods/482f7ca1-8b55-4a4d-8a78-fe296e1801c0/volumes" Nov 28 12:20:51 crc kubenswrapper[5030]: I1128 12:20:51.392894 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:20:51 crc kubenswrapper[5030]: E1128 12:20:51.393597 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:20:57 crc kubenswrapper[5030]: I1128 12:20:57.242704 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk_188bc9f2-ac35-4a70-a6f2-8d691c351ef8/util/0.log" Nov 28 12:20:57 crc kubenswrapper[5030]: I1128 12:20:57.624620 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk_188bc9f2-ac35-4a70-a6f2-8d691c351ef8/pull/0.log" Nov 28 12:20:57 crc kubenswrapper[5030]: I1128 12:20:57.657152 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk_188bc9f2-ac35-4a70-a6f2-8d691c351ef8/util/0.log" Nov 28 12:20:57 crc kubenswrapper[5030]: I1128 12:20:57.664277 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk_188bc9f2-ac35-4a70-a6f2-8d691c351ef8/pull/0.log" Nov 28 12:20:57 crc 
kubenswrapper[5030]: I1128 12:20:57.850201 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk_188bc9f2-ac35-4a70-a6f2-8d691c351ef8/util/0.log" Nov 28 12:20:57 crc kubenswrapper[5030]: I1128 12:20:57.871827 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk_188bc9f2-ac35-4a70-a6f2-8d691c351ef8/extract/0.log" Nov 28 12:20:57 crc kubenswrapper[5030]: I1128 12:20:57.884904 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_27e8bc079695f3aed52a6c5be68196d91a6230a1a03a8fc87a19aa534fjjdjk_188bc9f2-ac35-4a70-a6f2-8d691c351ef8/pull/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.046552 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp_a3fd10b8-6b32-4a76-80a1-14a3ea9b4985/util/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.215148 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp_a3fd10b8-6b32-4a76-80a1-14a3ea9b4985/pull/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.228886 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp_a3fd10b8-6b32-4a76-80a1-14a3ea9b4985/pull/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.253452 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp_a3fd10b8-6b32-4a76-80a1-14a3ea9b4985/util/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.452048 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp_a3fd10b8-6b32-4a76-80a1-14a3ea9b4985/util/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.465278 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp_a3fd10b8-6b32-4a76-80a1-14a3ea9b4985/pull/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.478429 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5d473c3169f40b179d14921c90af2c8546b7b757fe551b7dba7d903f5dhtrnp_a3fd10b8-6b32-4a76-80a1-14a3ea9b4985/extract/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.649869 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb_927808a1-7261-4ddb-961f-302a544cb77c/util/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.809508 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb_927808a1-7261-4ddb-961f-302a544cb77c/util/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.809791 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb_927808a1-7261-4ddb-961f-302a544cb77c/pull/0.log" Nov 28 12:20:58 crc kubenswrapper[5030]: I1128 12:20:58.816991 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb_927808a1-7261-4ddb-961f-302a544cb77c/pull/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.027881 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb_927808a1-7261-4ddb-961f-302a544cb77c/util/0.log" Nov 28 12:20:59 crc 
kubenswrapper[5030]: I1128 12:20:59.030776 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb_927808a1-7261-4ddb-961f-302a544cb77c/extract/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.082109 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_87b4bb7621dcb67338b53778f2871f07aa0e4d3dfcd0fd25724bfd240bhk7pb_927808a1-7261-4ddb-961f-302a544cb77c/pull/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.225647 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh_ebc31616-3bb5-4c70-a664-7bbe8152ff83/util/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.465875 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh_ebc31616-3bb5-4c70-a664-7bbe8152ff83/pull/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.480073 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh_ebc31616-3bb5-4c70-a664-7bbe8152ff83/util/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.501523 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh_ebc31616-3bb5-4c70-a664-7bbe8152ff83/pull/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.663043 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh_ebc31616-3bb5-4c70-a664-7bbe8152ff83/util/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.663843 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh_ebc31616-3bb5-4c70-a664-7bbe8152ff83/extract/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.714979 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e59056grh_ebc31616-3bb5-4c70-a664-7bbe8152ff83/pull/0.log" Nov 28 12:20:59 crc kubenswrapper[5030]: I1128 12:20:59.893856 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r_2da742ad-42c7-4812-b7ee-04df6e644c0e/util/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.045351 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r_2da742ad-42c7-4812-b7ee-04df6e644c0e/pull/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.074642 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r_2da742ad-42c7-4812-b7ee-04df6e644c0e/pull/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.094658 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r_2da742ad-42c7-4812-b7ee-04df6e644c0e/util/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.275738 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r_2da742ad-42c7-4812-b7ee-04df6e644c0e/extract/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.285894 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r_2da742ad-42c7-4812-b7ee-04df6e644c0e/util/0.log" Nov 28 12:21:00 crc 
kubenswrapper[5030]: I1128 12:21:00.326187 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9f0c59a3968beec894e04476dd5efd0a707bad85f482efd4940498368c6x87r_2da742ad-42c7-4812-b7ee-04df6e644c0e/pull/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.328287 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz_22820358-bfdc-4f0f-94fd-a31b149e42ff/util/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.543205 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz_22820358-bfdc-4f0f-94fd-a31b149e42ff/pull/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.548001 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz_22820358-bfdc-4f0f-94fd-a31b149e42ff/util/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.552680 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz_22820358-bfdc-4f0f-94fd-a31b149e42ff/pull/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.728703 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz_22820358-bfdc-4f0f-94fd-a31b149e42ff/util/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.731727 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz_22820358-bfdc-4f0f-94fd-a31b149e42ff/pull/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.768222 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cbebfaa45bc89ca80e62f11a2a5a3c02d16daf97d7e8b91a207d47c93djj5bz_22820358-bfdc-4f0f-94fd-a31b149e42ff/extract/0.log" Nov 28 12:21:00 crc kubenswrapper[5030]: I1128 12:21:00.856404 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp_e4a3b7f5-6933-4be3-ae18-394be8bb4cf6/util/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.025491 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp_e4a3b7f5-6933-4be3-ae18-394be8bb4cf6/util/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.038322 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp_e4a3b7f5-6933-4be3-ae18-394be8bb4cf6/pull/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.041949 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp_e4a3b7f5-6933-4be3-ae18-394be8bb4cf6/pull/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.240920 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp_e4a3b7f5-6933-4be3-ae18-394be8bb4cf6/pull/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.247989 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp_e4a3b7f5-6933-4be3-ae18-394be8bb4cf6/extract/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.254376 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d854280893f664a16f85f7c4268f877fa95509a4e25ae77fea242eaaa3mlmhp_e4a3b7f5-6933-4be3-ae18-394be8bb4cf6/util/0.log" Nov 28 12:21:01 crc 
kubenswrapper[5030]: I1128 12:21:01.555299 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-index-llnqd_8bfb5317-9e89-460a-b5ae-5d553d2c9eba/registry-server/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.565272 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6d74bbdf9d-vnztl_895c6168-d396-4e47-9d84-a5fa7e55eafa/manager/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.583901 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-86dcdc6f89-snck4_15d81285-0f77-422a-9189-d17114debbfc/manager/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.746532 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-js7dn_e3d7cbc8-3d46-4db9-a4b7-5b19f326b476/registry-server/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.778862 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58cc75b84f-rp7cr_dc60e0cc-8fdc-4dd8-b191-2f2118e85785/kube-rbac-proxy/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.810351 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58cc75b84f-rp7cr_dc60e0cc-8fdc-4dd8-b191-2f2118e85785/manager/0.log" Nov 28 12:21:01 crc kubenswrapper[5030]: I1128 12:21:01.977444 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-g8m7z_08a84de3-578b-42c2-8ca8-6ed063ab0d71/registry-server/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 12:21:02.122199 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-54f75d97f-lbqxb_c79a1b48-ab32-4ab9-9226-54677c98d72c/manager/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 
12:21:02.173348 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-b8vrk_911d95fe-5fc4-4f07-aa44-f33c853625c6/registry-server/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 12:21:02.252943 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7cdbb9546b-2xp4v_4ad0efc8-bb7f-4a51-9ca8-a929626c3a29/manager/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 12:21:02.511138 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-llltt_c132b5f7-718b-4f93-9589-ae208ff59e29/registry-server/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 12:21:02.581176 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-s9jkg_b55b7cc9-5974-46a7-b685-252d63a2ada3/operator/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 12:21:02.608409 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-78v5c_a40eab08-1542-4a5a-b92f-ad99f4a6e6a3/registry-server/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 12:21:02.726689 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d968d985d-jqp2z_8840337f-4675-46f8-b78b-5097e685fe53/manager/0.log" Nov 28 12:21:02 crc kubenswrapper[5030]: I1128 12:21:02.859280 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-njjsd_e2cb884d-70c4-4134-a38f-866f4650a9bb/registry-server/0.log" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.037039 5030 scope.go:117] "RemoveContainer" containerID="30ccf97afc548aaa8f7923a9c115f9ec3e13b9c2ee241a9160d46f7a3a95867e" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.092178 5030 scope.go:117] "RemoveContainer" 
containerID="598c65f16c59bc89f2f9b1657ed8c7dfb935d27b0334227463f1772236905ccf" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.127996 5030 scope.go:117] "RemoveContainer" containerID="493149401bbd1a6501e7e20268a99fffd594bc9c29858532cebe95d26a471967" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.175682 5030 scope.go:117] "RemoveContainer" containerID="71687a7b6f937d3e01d783eb2448ed6ae33971b2af2304afc67a57484aaf3c4e" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.206927 5030 scope.go:117] "RemoveContainer" containerID="b6912f3ec6269a8e89dda6c11fd5325ef3e7e60619ee0487d072caa53376985c" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.230008 5030 scope.go:117] "RemoveContainer" containerID="6ecb90ac5a53babfe41221a56583aee9b7636c3da4eef4e5c69c02da4c2972a8" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.278342 5030 scope.go:117] "RemoveContainer" containerID="4ff6f5fa051a0da9b3e269845c483428d30976b43f619d6e1870f99d46493c67" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.309809 5030 scope.go:117] "RemoveContainer" containerID="e6dfc2ac5429186f0d3b257f50a065f702c63fe8e77c8d0396e4240cca32561a" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.338401 5030 scope.go:117] "RemoveContainer" containerID="daf3208436221dcb518e66afdbe3765adf19f2378b5bd682dbddcf960656e412" Nov 28 12:21:04 crc kubenswrapper[5030]: I1128 12:21:04.393715 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:21:04 crc kubenswrapper[5030]: E1128 12:21:04.394156 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" 
podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:21:19 crc kubenswrapper[5030]: I1128 12:21:19.392792 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:21:19 crc kubenswrapper[5030]: E1128 12:21:19.393763 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:21:20 crc kubenswrapper[5030]: I1128 12:21:20.412803 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-47vf7_d077d777-7c83-42d3-9c90-b9155040a1ea/control-plane-machine-set-operator/0.log" Nov 28 12:21:20 crc kubenswrapper[5030]: I1128 12:21:20.578184 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bdbjw_a5c19601-52c5-40bd-8640-3fd0128e7b6a/kube-rbac-proxy/0.log" Nov 28 12:21:20 crc kubenswrapper[5030]: I1128 12:21:20.583865 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bdbjw_a5c19601-52c5-40bd-8640-3fd0128e7b6a/machine-api-operator/0.log" Nov 28 12:21:32 crc kubenswrapper[5030]: I1128 12:21:32.397875 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:21:32 crc kubenswrapper[5030]: E1128 12:21:32.398816 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:21:38 crc kubenswrapper[5030]: I1128 12:21:38.702225 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-h6zgg_be02d333-c255-4eae-91d6-14dff16fd95f/kube-rbac-proxy/0.log" Nov 28 12:21:38 crc kubenswrapper[5030]: I1128 12:21:38.780308 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-h6zgg_be02d333-c255-4eae-91d6-14dff16fd95f/controller/0.log" Nov 28 12:21:38 crc kubenswrapper[5030]: I1128 12:21:38.903246 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-frr-files/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.082682 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-frr-files/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.091713 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-reloader/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.092079 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-reloader/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.111364 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-metrics/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.340285 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-metrics/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 
12:21:39.347711 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-metrics/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.382678 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-reloader/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.402914 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-frr-files/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.575210 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-reloader/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.590370 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-frr-files/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.609691 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/cp-metrics/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.613131 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/controller/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.784767 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/kube-rbac-proxy/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.787076 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/frr-metrics/0.log" Nov 28 12:21:39 crc kubenswrapper[5030]: I1128 12:21:39.819454 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/kube-rbac-proxy-frr/0.log" Nov 28 12:21:40 crc kubenswrapper[5030]: I1128 12:21:40.001968 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/reloader/0.log" Nov 28 12:21:40 crc kubenswrapper[5030]: I1128 12:21:40.070457 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-m8487_1f4ef950-494c-4d87-8886-1386f04a3970/frr-k8s-webhook-server/0.log" Nov 28 12:21:40 crc kubenswrapper[5030]: I1128 12:21:40.305011 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-56c7ff6859-5qpcg_2f5fae05-87fd-4703-8262-540cbff62263/manager/0.log" Nov 28 12:21:40 crc kubenswrapper[5030]: I1128 12:21:40.414550 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vg7xg_032fe48e-074f-4471-80f1-940c9a22e1b3/frr/0.log" Nov 28 12:21:40 crc kubenswrapper[5030]: I1128 12:21:40.487082 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7c9d545dc4-92nd9_b56a2b73-e153-400a-9b6b-c7a20d9cbed6/webhook-server/0.log" Nov 28 12:21:40 crc kubenswrapper[5030]: I1128 12:21:40.576265 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gr75f_e4331617-c99d-4b39-a50e-004035983d31/kube-rbac-proxy/0.log" Nov 28 12:21:40 crc kubenswrapper[5030]: I1128 12:21:40.737308 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gr75f_e4331617-c99d-4b39-a50e-004035983d31/speaker/0.log" Nov 28 12:21:47 crc kubenswrapper[5030]: I1128 12:21:47.393333 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:21:47 crc kubenswrapper[5030]: E1128 12:21:47.394059 5030 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:21:56 crc kubenswrapper[5030]: I1128 12:21:56.326637 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_1fc49197-af09-489d-a1cf-a6faef96e773/mysql-bootstrap/0.log" Nov 28 12:21:56 crc kubenswrapper[5030]: I1128 12:21:56.394786 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_keystone-5854d7bc86-t2mhb_e76b8eea-d098-4f8b-9048-991ad0e4c1da/keystone-api/0.log" Nov 28 12:21:56 crc kubenswrapper[5030]: I1128 12:21:56.466804 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_1fc49197-af09-489d-a1cf-a6faef96e773/mysql-bootstrap/0.log" Nov 28 12:21:56 crc kubenswrapper[5030]: I1128 12:21:56.571846 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-0_1fc49197-af09-489d-a1cf-a6faef96e773/galera/0.log" Nov 28 12:21:56 crc kubenswrapper[5030]: I1128 12:21:56.718062 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_71bc6057-afa8-4d14-8007-63a195454497/mysql-bootstrap/0.log" Nov 28 12:21:56 crc kubenswrapper[5030]: I1128 12:21:56.975048 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_71bc6057-afa8-4d14-8007-63a195454497/galera/0.log" Nov 28 12:21:56 crc kubenswrapper[5030]: I1128 12:21:56.980937 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-1_71bc6057-afa8-4d14-8007-63a195454497/mysql-bootstrap/0.log" Nov 28 12:21:57 crc kubenswrapper[5030]: I1128 12:21:57.208969 
5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_58f32b69-3330-4888-85e8-b3e0b0eed50c/mysql-bootstrap/0.log" Nov 28 12:21:57 crc kubenswrapper[5030]: I1128 12:21:57.430051 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_58f32b69-3330-4888-85e8-b3e0b0eed50c/galera/0.log" Nov 28 12:21:57 crc kubenswrapper[5030]: I1128 12:21:57.457993 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstack-galera-2_58f32b69-3330-4888-85e8-b3e0b0eed50c/mysql-bootstrap/0.log" Nov 28 12:21:57 crc kubenswrapper[5030]: I1128 12:21:57.697576 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_6092acec-456c-4682-8567-f20d6022b818/openstackclient/0.log" Nov 28 12:21:57 crc kubenswrapper[5030]: I1128 12:21:57.717915 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_a569f835-2a0b-4752-8d4c-8a0c22524cfa/setup-container/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.008159 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_a569f835-2a0b-4752-8d4c-8a0c22524cfa/rabbitmq/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.017829 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_rabbitmq-server-0_a569f835-2a0b-4752-8d4c-8a0c22524cfa/setup-container/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.063805 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_memcached-0_64d47945-e54a-49e9-acfb-40b62274a05b/memcached/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.223706 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-6bd58cfcf7-cslck_3caf6ea0-05f6-4415-8486-f0472d654719/proxy-httpd/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.252848 
5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-proxy-6bd58cfcf7-cslck_3caf6ea0-05f6-4415-8486-f0472d654719/proxy-server/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.259687 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-ring-rebalance-vdwnx_b04753c6-7d4f-472c-89b9-9ef512737377/swift-ring-rebalance/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.471857 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/account-auditor/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.473411 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/account-reaper/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.527666 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/account-replicator/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.635357 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/account-server/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.657714 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/container-replicator/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.669911 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/container-auditor/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.728320 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/container-server/0.log" Nov 28 12:21:58 crc 
kubenswrapper[5030]: I1128 12:21:58.850427 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/container-updater/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.870380 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/object-auditor/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.881666 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/object-expirer/0.log" Nov 28 12:21:58 crc kubenswrapper[5030]: I1128 12:21:58.908059 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/object-replicator/0.log" Nov 28 12:21:59 crc kubenswrapper[5030]: I1128 12:21:59.031669 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/object-server/0.log" Nov 28 12:21:59 crc kubenswrapper[5030]: I1128 12:21:59.047990 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/object-updater/0.log" Nov 28 12:21:59 crc kubenswrapper[5030]: I1128 12:21:59.086962 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/rsync/0.log" Nov 28 12:21:59 crc kubenswrapper[5030]: I1128 12:21:59.127816 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_swift-storage-0_c3818002-2687-4201-8ceb-f0272289cab9/swift-recon-cron/0.log" Nov 28 12:22:00 crc kubenswrapper[5030]: I1128 12:22:00.393192 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:22:00 crc kubenswrapper[5030]: E1128 12:22:00.393521 5030 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:22:04 crc kubenswrapper[5030]: I1128 12:22:04.502204 5030 scope.go:117] "RemoveContainer" containerID="850a06ebdab719c534f763269f68d03b510d82ebfc12392ed05b0571ffb716f2" Nov 28 12:22:04 crc kubenswrapper[5030]: I1128 12:22:04.569544 5030 scope.go:117] "RemoveContainer" containerID="00a9daa56c4d2d280c0b4d53d6556758e5e2c86a07f6e63669152359f335196c" Nov 28 12:22:04 crc kubenswrapper[5030]: I1128 12:22:04.590626 5030 scope.go:117] "RemoveContainer" containerID="5c761ed53baee30ed2a9ccc0d5fe42622ed5103de16fbf0270ee8618ebed7342" Nov 28 12:22:04 crc kubenswrapper[5030]: I1128 12:22:04.644513 5030 scope.go:117] "RemoveContainer" containerID="3bd6815b1e5cb4fad77b7b632eb82e68678b930a1ad94a41bb4770ea5736afb8" Nov 28 12:22:04 crc kubenswrapper[5030]: I1128 12:22:04.679643 5030 scope.go:117] "RemoveContainer" containerID="ba87277b0b4715cd8db41bf91f3dc647a49ac3fb396e2a3d7700ce5ab3de8924" Nov 28 12:22:04 crc kubenswrapper[5030]: I1128 12:22:04.715377 5030 scope.go:117] "RemoveContainer" containerID="46e598fe14730e012da74919bd39e7c73b69416e53ecc6df1460ba87cbd9f72b" Nov 28 12:22:04 crc kubenswrapper[5030]: I1128 12:22:04.739860 5030 scope.go:117] "RemoveContainer" containerID="2335ef443a935f8fb54c371729cad25f22762eb839b768352ee13a8dcc992b70" Nov 28 12:22:14 crc kubenswrapper[5030]: I1128 12:22:14.491151 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm_64a19be7-4e6b-43eb-9ebd-93a60054b661/util/0.log" Nov 28 12:22:14 crc kubenswrapper[5030]: I1128 
12:22:14.610870 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm_64a19be7-4e6b-43eb-9ebd-93a60054b661/util/0.log" Nov 28 12:22:14 crc kubenswrapper[5030]: I1128 12:22:14.657854 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm_64a19be7-4e6b-43eb-9ebd-93a60054b661/pull/0.log" Nov 28 12:22:14 crc kubenswrapper[5030]: I1128 12:22:14.658526 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm_64a19be7-4e6b-43eb-9ebd-93a60054b661/pull/0.log" Nov 28 12:22:14 crc kubenswrapper[5030]: I1128 12:22:14.866245 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm_64a19be7-4e6b-43eb-9ebd-93a60054b661/util/0.log" Nov 28 12:22:14 crc kubenswrapper[5030]: I1128 12:22:14.895756 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm_64a19be7-4e6b-43eb-9ebd-93a60054b661/pull/0.log" Nov 28 12:22:14 crc kubenswrapper[5030]: I1128 12:22:14.916178 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wc9mm_64a19be7-4e6b-43eb-9ebd-93a60054b661/extract/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.127422 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qv7zh_64e3a3cf-b757-4bc8-8b2e-acd2cd843e55/extract-utilities/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.271918 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qv7zh_64e3a3cf-b757-4bc8-8b2e-acd2cd843e55/extract-content/0.log" 
Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.276690 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qv7zh_64e3a3cf-b757-4bc8-8b2e-acd2cd843e55/extract-utilities/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.299338 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qv7zh_64e3a3cf-b757-4bc8-8b2e-acd2cd843e55/extract-content/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.393341 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:22:15 crc kubenswrapper[5030]: E1128 12:22:15.393758 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.456271 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qv7zh_64e3a3cf-b757-4bc8-8b2e-acd2cd843e55/extract-content/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.478314 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qv7zh_64e3a3cf-b757-4bc8-8b2e-acd2cd843e55/extract-utilities/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.704627 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tztrm_f3b6b1e4-08cb-4867-b88a-ee08ddcaa045/extract-utilities/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.832382 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-qv7zh_64e3a3cf-b757-4bc8-8b2e-acd2cd843e55/registry-server/0.log" Nov 28 12:22:15 crc kubenswrapper[5030]: I1128 12:22:15.983549 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tztrm_f3b6b1e4-08cb-4867-b88a-ee08ddcaa045/extract-utilities/0.log" Nov 28 12:22:16 crc kubenswrapper[5030]: I1128 12:22:16.048736 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tztrm_f3b6b1e4-08cb-4867-b88a-ee08ddcaa045/extract-content/0.log" Nov 28 12:22:16 crc kubenswrapper[5030]: I1128 12:22:16.236329 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tztrm_f3b6b1e4-08cb-4867-b88a-ee08ddcaa045/extract-content/0.log" Nov 28 12:22:16 crc kubenswrapper[5030]: I1128 12:22:16.426862 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tztrm_f3b6b1e4-08cb-4867-b88a-ee08ddcaa045/extract-utilities/0.log" Nov 28 12:22:16 crc kubenswrapper[5030]: I1128 12:22:16.444122 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tztrm_f3b6b1e4-08cb-4867-b88a-ee08ddcaa045/extract-content/0.log" Nov 28 12:22:16 crc kubenswrapper[5030]: I1128 12:22:16.703693 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ntjwt_da571a9b-f5ae-4bcf-b98c-f92299206a54/marketplace-operator/0.log" Nov 28 12:22:16 crc kubenswrapper[5030]: I1128 12:22:16.856273 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmp59_4ffcaa37-8853-409e-aeff-52278c6f2028/extract-utilities/0.log" Nov 28 12:22:16 crc kubenswrapper[5030]: I1128 12:22:16.897247 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-tztrm_f3b6b1e4-08cb-4867-b88a-ee08ddcaa045/registry-server/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.046299 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmp59_4ffcaa37-8853-409e-aeff-52278c6f2028/extract-utilities/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.073394 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmp59_4ffcaa37-8853-409e-aeff-52278c6f2028/extract-content/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.084138 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmp59_4ffcaa37-8853-409e-aeff-52278c6f2028/extract-content/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.293606 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmp59_4ffcaa37-8853-409e-aeff-52278c6f2028/extract-content/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.295089 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmp59_4ffcaa37-8853-409e-aeff-52278c6f2028/extract-utilities/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.372324 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pmp59_4ffcaa37-8853-409e-aeff-52278c6f2028/registry-server/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.518953 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ssp9x_951b2dc6-7d8d-4f04-8c86-572af9af6000/extract-utilities/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.712913 5030 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-ssp9x_951b2dc6-7d8d-4f04-8c86-572af9af6000/extract-content/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.742791 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ssp9x_951b2dc6-7d8d-4f04-8c86-572af9af6000/extract-utilities/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.749882 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ssp9x_951b2dc6-7d8d-4f04-8c86-572af9af6000/extract-content/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.903512 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ssp9x_951b2dc6-7d8d-4f04-8c86-572af9af6000/extract-utilities/0.log" Nov 28 12:22:17 crc kubenswrapper[5030]: I1128 12:22:17.960865 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ssp9x_951b2dc6-7d8d-4f04-8c86-572af9af6000/extract-content/0.log" Nov 28 12:22:18 crc kubenswrapper[5030]: I1128 12:22:18.266392 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ssp9x_951b2dc6-7d8d-4f04-8c86-572af9af6000/registry-server/0.log" Nov 28 12:22:28 crc kubenswrapper[5030]: I1128 12:22:28.393431 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:22:28 crc kubenswrapper[5030]: E1128 12:22:28.394507 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:22:39 crc 
kubenswrapper[5030]: I1128 12:22:39.392867 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:22:39 crc kubenswrapper[5030]: E1128 12:22:39.395785 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:22:51 crc kubenswrapper[5030]: I1128 12:22:51.393310 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:22:51 crc kubenswrapper[5030]: E1128 12:22:51.394420 5030 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cqr62_openshift-machine-config-operator(d8e6d4c7-9635-4925-bf75-96379201ef67)\"" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" Nov 28 12:23:04 crc kubenswrapper[5030]: I1128 12:23:04.405986 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c" Nov 28 12:23:04 crc kubenswrapper[5030]: I1128 12:23:04.869007 5030 scope.go:117] "RemoveContainer" containerID="db3ce55ae441325cbb66ce7308255af7316f1626da2dc15bf2f011df6581197f" Nov 28 12:23:04 crc kubenswrapper[5030]: I1128 12:23:04.910252 5030 scope.go:117] "RemoveContainer" containerID="7b939c7d257ece5946fb1fd4f0b0e192f7bcbe3c31e82c7362dfb14d2e29ded7" Nov 28 12:23:04 crc kubenswrapper[5030]: I1128 12:23:04.945725 5030 scope.go:117] "RemoveContainer" 
containerID="cb539a9036dbc76fd8be1f623f7ad3e610e49929da53725aaf973cf90165cf77" Nov 28 12:23:05 crc kubenswrapper[5030]: I1128 12:23:05.160948 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"e62dd2433894957d5f99f97a07fcc6a9855c0574e6ad9c002210a68c830fadf7"} Nov 28 12:23:30 crc kubenswrapper[5030]: I1128 12:23:30.421185 5030 generic.go:334] "Generic (PLEG): container finished" podID="01afb59a-3bf5-47f1-8256-b96dd205649a" containerID="da32bfd98af30cfa94dac44fc9a51cfb136ac3804f50c163cd3efe8514409f40" exitCode=0 Nov 28 12:23:30 crc kubenswrapper[5030]: I1128 12:23:30.421316 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddkqf/must-gather-snk7m" event={"ID":"01afb59a-3bf5-47f1-8256-b96dd205649a","Type":"ContainerDied","Data":"da32bfd98af30cfa94dac44fc9a51cfb136ac3804f50c163cd3efe8514409f40"} Nov 28 12:23:30 crc kubenswrapper[5030]: I1128 12:23:30.423611 5030 scope.go:117] "RemoveContainer" containerID="da32bfd98af30cfa94dac44fc9a51cfb136ac3804f50c163cd3efe8514409f40" Nov 28 12:23:30 crc kubenswrapper[5030]: I1128 12:23:30.527732 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ddkqf_must-gather-snk7m_01afb59a-3bf5-47f1-8256-b96dd205649a/gather/0.log" Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.231678 5030 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ddkqf/must-gather-snk7m"] Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.232877 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ddkqf/must-gather-snk7m" podUID="01afb59a-3bf5-47f1-8256-b96dd205649a" containerName="copy" containerID="cri-o://99d726e78740b3e869d06719097343e4eae36819239ce1dd6911e8475e8b5c17" gracePeriod=2 Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.253811 5030 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ddkqf/must-gather-snk7m"] Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.509979 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ddkqf_must-gather-snk7m_01afb59a-3bf5-47f1-8256-b96dd205649a/copy/0.log" Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.511197 5030 generic.go:334] "Generic (PLEG): container finished" podID="01afb59a-3bf5-47f1-8256-b96dd205649a" containerID="99d726e78740b3e869d06719097343e4eae36819239ce1dd6911e8475e8b5c17" exitCode=143 Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.610288 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ddkqf_must-gather-snk7m_01afb59a-3bf5-47f1-8256-b96dd205649a/copy/0.log" Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.610799 5030 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.775209 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01afb59a-3bf5-47f1-8256-b96dd205649a-must-gather-output\") pod \"01afb59a-3bf5-47f1-8256-b96dd205649a\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.775369 5030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94fn2\" (UniqueName: \"kubernetes.io/projected/01afb59a-3bf5-47f1-8256-b96dd205649a-kube-api-access-94fn2\") pod \"01afb59a-3bf5-47f1-8256-b96dd205649a\" (UID: \"01afb59a-3bf5-47f1-8256-b96dd205649a\") " Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.791227 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01afb59a-3bf5-47f1-8256-b96dd205649a-kube-api-access-94fn2" (OuterVolumeSpecName: 
"kube-api-access-94fn2") pod "01afb59a-3bf5-47f1-8256-b96dd205649a" (UID: "01afb59a-3bf5-47f1-8256-b96dd205649a"). InnerVolumeSpecName "kube-api-access-94fn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.872553 5030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01afb59a-3bf5-47f1-8256-b96dd205649a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "01afb59a-3bf5-47f1-8256-b96dd205649a" (UID: "01afb59a-3bf5-47f1-8256-b96dd205649a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.878652 5030 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01afb59a-3bf5-47f1-8256-b96dd205649a-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 12:23:37 crc kubenswrapper[5030]: I1128 12:23:37.878698 5030 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94fn2\" (UniqueName: \"kubernetes.io/projected/01afb59a-3bf5-47f1-8256-b96dd205649a-kube-api-access-94fn2\") on node \"crc\" DevicePath \"\"" Nov 28 12:23:38 crc kubenswrapper[5030]: I1128 12:23:38.401865 5030 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01afb59a-3bf5-47f1-8256-b96dd205649a" path="/var/lib/kubelet/pods/01afb59a-3bf5-47f1-8256-b96dd205649a/volumes" Nov 28 12:23:38 crc kubenswrapper[5030]: I1128 12:23:38.521028 5030 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ddkqf_must-gather-snk7m_01afb59a-3bf5-47f1-8256-b96dd205649a/copy/0.log" Nov 28 12:23:38 crc kubenswrapper[5030]: I1128 12:23:38.521540 5030 scope.go:117] "RemoveContainer" containerID="99d726e78740b3e869d06719097343e4eae36819239ce1dd6911e8475e8b5c17" Nov 28 12:23:38 crc kubenswrapper[5030]: I1128 12:23:38.521653 5030 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddkqf/must-gather-snk7m" Nov 28 12:23:38 crc kubenswrapper[5030]: I1128 12:23:38.540824 5030 scope.go:117] "RemoveContainer" containerID="da32bfd98af30cfa94dac44fc9a51cfb136ac3804f50c163cd3efe8514409f40" Nov 28 12:24:05 crc kubenswrapper[5030]: I1128 12:24:05.082445 5030 scope.go:117] "RemoveContainer" containerID="2d305980c505e7d29e7230dcb196f99b0da6ef9b887dc4e8204e9f78eb1b86c8" Nov 28 12:24:05 crc kubenswrapper[5030]: I1128 12:24:05.139591 5030 scope.go:117] "RemoveContainer" containerID="6e1f779c21fd85d02bbfa8e1d38bda1b11f942d277c0f4998be3b75a904388cb" Nov 28 12:24:05 crc kubenswrapper[5030]: I1128 12:24:05.170805 5030 scope.go:117] "RemoveContainer" containerID="10330f82554a6e0e3a5473a0a2b0a1fbe5ee2c77c286a56650d98ae627554083" Nov 28 12:24:05 crc kubenswrapper[5030]: I1128 12:24:05.231209 5030 scope.go:117] "RemoveContainer" containerID="b11a1f31aa8fc72c072c5cdf734fd2d71d4da4901ceafcc49a99a0b105bd63f2" Nov 28 12:24:05 crc kubenswrapper[5030]: I1128 12:24:05.262762 5030 scope.go:117] "RemoveContainer" containerID="5822aee640d6e74d7cf2e863976531fb4cd97b342d02d650cbd63bbe605cf24c" Nov 28 12:25:33 crc kubenswrapper[5030]: I1128 12:25:33.201922 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:25:33 crc kubenswrapper[5030]: I1128 12:25:33.202745 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:26:03 crc kubenswrapper[5030]: I1128 12:26:03.202393 5030 patch_prober.go:28] interesting 
pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:26:03 crc kubenswrapper[5030]: I1128 12:26:03.203344 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:26:05 crc kubenswrapper[5030]: I1128 12:26:05.471764 5030 scope.go:117] "RemoveContainer" containerID="927e0fe2573eceb564ed28d253b7d0df10adfb4c70a395ec3444e2f734d903a5" Nov 28 12:26:33 crc kubenswrapper[5030]: I1128 12:26:33.202832 5030 patch_prober.go:28] interesting pod/machine-config-daemon-cqr62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 12:26:33 crc kubenswrapper[5030]: I1128 12:26:33.203537 5030 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 12:26:33 crc kubenswrapper[5030]: I1128 12:26:33.203603 5030 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" Nov 28 12:26:33 crc kubenswrapper[5030]: I1128 12:26:33.204590 5030 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e62dd2433894957d5f99f97a07fcc6a9855c0574e6ad9c002210a68c830fadf7"} pod="openshift-machine-config-operator/machine-config-daemon-cqr62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 12:26:33 crc kubenswrapper[5030]: I1128 12:26:33.204690 5030 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" podUID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerName="machine-config-daemon" containerID="cri-o://e62dd2433894957d5f99f97a07fcc6a9855c0574e6ad9c002210a68c830fadf7" gracePeriod=600 Nov 28 12:26:34 crc kubenswrapper[5030]: I1128 12:26:34.281400 5030 generic.go:334] "Generic (PLEG): container finished" podID="d8e6d4c7-9635-4925-bf75-96379201ef67" containerID="e62dd2433894957d5f99f97a07fcc6a9855c0574e6ad9c002210a68c830fadf7" exitCode=0 Nov 28 12:26:34 crc kubenswrapper[5030]: I1128 12:26:34.281514 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerDied","Data":"e62dd2433894957d5f99f97a07fcc6a9855c0574e6ad9c002210a68c830fadf7"} Nov 28 12:26:34 crc kubenswrapper[5030]: I1128 12:26:34.282224 5030 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cqr62" event={"ID":"d8e6d4c7-9635-4925-bf75-96379201ef67","Type":"ContainerStarted","Data":"ed9c7eb5621c71d615300fc87381a9466e549661c53adc446b3506a0674fed91"} Nov 28 12:26:34 crc kubenswrapper[5030]: I1128 12:26:34.282258 5030 scope.go:117] "RemoveContainer" containerID="8554f995fc6075fb0451de7636cc9123c67c53fe5f1dc7f9b8ab19404b57e49c"